JPH05232997A - Voice coding device - Google Patents

Voice coding device

Info

Publication number
JPH05232997A
Authority
JP
Japan
Prior art keywords
vector
frame
lpc coefficient
speech
sound source
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP4035881A
Other languages
Japanese (ja)
Other versions
JP3248215B2 (en)
Inventor
Masahiro Serizawa
芹沢  昌宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NEC Corp
Original Assignee
NEC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NEC Corp filed Critical NEC Corp
Priority to JP03588192A priority Critical patent/JP3248215B2/en
Priority to DE69329476T priority patent/DE69329476T2/en
Priority to EP93102794A priority patent/EP0557940B1/en
Priority to CA002090205A priority patent/CA2090205C/en
Publication of JPH05232997A publication Critical patent/JPH05232997A/en
Application granted granted Critical
Publication of JP3248215B2 publication Critical patent/JP3248215B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/08 Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
    • G10L19/12 Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters, the excitation function being a code excitation, e.g. in code excited linear prediction [CELP] vocoders

Abstract

PURPOSE: To improve the sound quality of the reproduced speech signal by using, when searching for a code vector, a distance measure weighted with the LPC coefficients corresponding to each of the sections set up in advance within the vector. CONSTITUTION: Each codebook is searched using, as the distance measure, the squared error between the input speech signal, divided and weighted by an intra-frame divider 105 and a weighting filter 110, and the respective code vectors of an adaptive codebook 145 and an excitation (sound source) codebook 180, divided, filter-reproduced and weighted through intra-frame dividers 150, 185, reproduction filters 155, 190 and weighting filters 160, 195.

Description

[Detailed Description of the Invention]

[0001]

[Field of Industrial Application]
The present invention relates to a speech coding system for encoding a speech signal at a low bit rate, in particular at 8 kbps or less, with high quality.

[0002]

[Prior Art]
A known conventional speech coding apparatus is CELP (Code Excited LPC Coding), described, for example, in the paper by M. Schroeder and B. Atal entitled "Code-excited linear prediction: High quality speech at very low bit rates" (IEEE Proc. ICASSP-85, pp. 937-940, 1985) (Reference 1). In this scheme, the transmitting side first applies linear prediction analysis to the speech signal and cuts the resulting linear prediction residual into frames of fixed length to form an excitation (sound source) vector. Next, from an adaptive codebook generated by cutting out past reproduced excitation signals, the adaptive code vector whose perceptually weighted, reproduced version has the smallest squared distance to the excitation vector is selected, and the selected adaptive code vector is subtracted from the excitation vector to leave a residual excitation vector. Further, from a previously designed excitation codebook, the excitation code vector with the smallest perceptually weighted squared reproduction distance to this residual is selected, and the optimum gain for that excitation code vector is computed. The indexes representing the selected adaptive code vector, the excitation code vector, their gains, and the linear prediction coefficients are then transmitted. The receiving side reproduces the speech signal from these indexes.
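For orientation, the loop structure this describes can be sketched as follows (a minimal illustration in Python; the function and variable names, and the two-stage search with a per-candidate optimal gain, are generic CELP practice rather than the exact procedure of Reference 1):

```python
import numpy as np

def celp_search_frame(target, adaptive_cb, excitation_cb, synth_weight):
    """Illustrative CELP frame search: pick the adaptive and excitation code
    vectors whose synthesized, perceptually weighted versions are closest
    (least squared error) to the weighted target."""
    def search(codebook, tgt):
        best_idx, best_gain, best_err = -1, 0.0, np.inf
        best_syn = np.zeros_like(tgt)
        for idx, cv in enumerate(codebook):
            syn = synth_weight(cv)                  # synthesis + perceptual weighting
            denom = float(np.dot(syn, syn))
            if denom == 0.0:
                continue
            gain = float(np.dot(tgt, syn)) / denom  # optimal gain for this candidate
            err = float(np.sum((tgt - gain * syn) ** 2))
            if err < best_err:
                best_idx, best_gain, best_err, best_syn = idx, gain, err, syn
        return best_idx, best_gain, best_syn

    a_idx, a_gain, a_syn = search(adaptive_cb, np.asarray(target, dtype=float))
    residual = np.asarray(target, dtype=float) - a_gain * a_syn   # long-term prediction removed
    e_idx, e_gain, _ = search(excitation_cb, residual)            # excitation codebook stage
    return a_idx, a_gain, e_idx, e_gain
```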

[0003]

[Problems to Be Solved by the Invention]
In the conventional speech coding apparatus, however, the weighted squared reproduction distance used in the codebook search is computed with a single set of LPC coefficients over the whole vector. As a result, particularly when the vector length is long, the change in the frequency characteristics of the speech signal within the vector cannot be approximated well, and the sound quality deteriorates.

[0004] An object of the present invention is to solve the above problem and to provide a speech coding apparatus having a codebook search scheme that quantizes the speech signal more efficiently.

[0005]

[Means for Solving the Problems]
The present invention is a speech coding apparatus comprising at least: means for cutting an input speech signal into frames of fixed length to generate a speech vector; means for extracting LPC coefficients by linear prediction analysis of the input speech signal at every predetermined interval; means for determining, from past reproduced excitation signals, the adaptive code vector of the current frame, equal in length to the frame, using the speech vector and the LPC coefficients; and means for determining the excitation code vector of the current frame from a previously designed excitation codebook holding a plurality of vectors equal in length to the frame, using the speech vector, the LPC coefficients, and the determined adaptive code vector. The apparatus is characterized by comprising means for obtaining section LPC coefficients by linear prediction analysis of the input speech signal for each predetermined section within the frame, and means for forming, from the obtained section LPC coefficients, the weighting function used when the adaptive code vector and the excitation code vector of the frame are determined.
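The distinguishing means is this per-section LPC analysis inside each frame. A minimal sketch of how such section LPC coefficients could be obtained (the autocorrelation method with Levinson-Durbin recursion, a tenth-order analysis, and the split into two sections are assumptions chosen for illustration):

```python
import numpy as np

def lpc_autocorr(x, order=10):
    """LPC coefficients [1, a1, ..., aL] via autocorrelation + Levinson-Durbin."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = float(r[0]) if r[0] > 0 else 1e-9
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a[1:i + 1] += k * a[i - 1::-1][:i]        # a_new[j] = a[j] + k * a[i - j]
        err = max(err * (1.0 - k * k), 1e-12)
    return a

def section_lpc(frame, n_sections=2, order=10):
    """Split one frame into sections and return one LPC set per section."""
    return [lpc_autocorr(s, order)
            for s in np.array_split(np.asarray(frame, dtype=float), n_sections)]
```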

[0006]

[Operation]
The operation of the speech coding apparatus according to the present invention is described below.

[0007] In the present invention, when the adaptive code vector and the excitation code vector are determined, the input speech signal is first cut into segments of fixed length to obtain the speech vector, and reproduced speech vectors are computed by filter reproduction, vector by vector, using the per-section LPC coefficients obtained by linear prediction of the input speech. Next, weighting is applied using section LPC coefficients obtained by linear prediction analysis for each section set up in advance within the vector (for example the two sections 0 to N/2-1 and N/2 to N-1, where N is the vector length = frame length), the perceptually weighted squared reproduction error of the following equation is computed, and this is used as the distance measure to select from each codebook the code vector that minimizes it.

[0008]

[Equation 1]

[0009] Here xw_k is the k-th element of the perceptually weighted speech vector obtained from the elements x_k of the speech vector,

[0010]

[Equation 2]

[0011] where

[0012]

[Equation 3]

[0013] is the weighting function, with

[0014]

[Equation 4]

[0015] and q^-i is a shift operator expressing a delay of i time units, satisfying the following expression.

[0016]

[Equation 5]

[0017] Here i is the section number, w_i(l) is the l-th order value of the section LPC coefficients of the i-th section of the speech vector (obtained, for example, by linear prediction analysis of the input speech signal over an analysis window containing that section), and L is the analysis order. γ1 and γ2 are coefficients that adjust the perceptual weighting.
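Equations (3) to (5) define the weighting function built from the section LPC coefficients w_i(l) and the factors γ1 and γ2. Those formulas are not reproduced on this page, so the sketch below assumes the common CELP weighting form W(z) = A(z/γ1)/A(z/γ2), applied with each section's own coefficients; the γ values are likewise placeholders:

```python
import numpy as np
from scipy.signal import lfilter

def perceptual_weight(section, section_lpc, gamma1=0.9, gamma2=0.6):
    """Weight one sub-vector with a filter derived from its section LPC set.
    Assumed form: W(z) = A(z/gamma1) / A(z/gamma2), where A(z) = 1 + sum w(l) z^-l
    and section_lpc = [1, w(1), ..., w(L)].  Filter state is reset here at each
    section boundary, a simplification relative to a real coder."""
    a = np.asarray(section_lpc, dtype=float)
    num = a * gamma1 ** np.arange(len(a))   # coefficients of A(z / gamma1)
    den = a * gamma2 ** np.arange(len(a))   # coefficients of A(z / gamma2)
    return lfilter(num, den, np.asarray(section, dtype=float))
```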

[0018]

[Equation 6]

[0019] is the k-th element of the weighted reproduced adaptive vector obtained from the elements c_ac,k(j) of the adaptive code vector with index j,

[0020]

[Equation 7]

[0021] where

[0022]

[Equation 8]

[0023] holds.

[0024] a_i(l) is the l-th order value of the LPC coefficients obtained (for example by quantization and decoding) from the LPC coefficients corresponding to the speech vector (obtained, for example, by linear prediction analysis of the input speech signal over an analysis window containing the cut-out frame).

[0025]

[Equation 9]

[0026] is the k-th element of the weighted reproduced excitation vector obtained from the elements c_ec,k(j) of the excitation code vector with index j,

[0027]

[Equation 10]

[0028] q_ac,j and q_ec,j are the optimum gains when the adaptive code vector and the excitation code vector of index j are selected, respectively. By performing the codebook search with Equation (1), the change in the frequency characteristics of the speech signal within the vector can be followed, and the sound quality of the coded speech can be improved.
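Putting the pieces together, the search this paragraph summarizes can be sketched as follows. Each candidate is weighted section by section with that section's LPC-derived filter, an optimal gain is computed per candidate, and the index with the smallest weighted squared error is kept. The candidates are assumed here to have been filter-reproduced already, the weighting form is the assumed one from the previous sketch, and filter state is reset per section, so this is a simplified rendering of Equations (1)-(10) rather than the exact expression:

```python
import numpy as np
from scipy.signal import lfilter

def weight_by_sections(vec, section_lpcs, gamma1=0.9, gamma2=0.6):
    """Apply the per-section weighting filter to each sub-vector and concatenate."""
    parts = np.array_split(np.asarray(vec, dtype=float), len(section_lpcs))
    out = []
    for p, a in zip(parts, section_lpcs):
        a = np.asarray(a, dtype=float)
        num = a * gamma1 ** np.arange(len(a))
        den = a * gamma2 ** np.arange(len(a))
        out.append(lfilter(num, den, p))
    return np.concatenate(out)

def search_codebook(speech_vector, reproduced_candidates, section_lpcs):
    """Return (index, optimal gain, error) of the candidate minimizing
    ||xw - q * cw||^2, with xw and cw weighted section by section."""
    xw = weight_by_sections(speech_vector, section_lpcs)
    best = (-1, 0.0, np.inf)
    for j, c in enumerate(reproduced_candidates):
        cw = weight_by_sections(c, section_lpcs)
        denom = float(np.dot(cw, cw))
        gain = float(np.dot(xw, cw)) / denom if denom > 0.0 else 0.0
        err = float(np.sum((xw - gain * cw) ** 2))
        if err < best[2]:
            best = (j, gain, err)
    return best
```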

[0029]

[Embodiment]
An embodiment of the present invention will now be described with reference to FIG. 1.

[0030] FIG. 1 is a block diagram showing an embodiment of the present invention. A speech signal is input through the input terminal 10 and passed to the frame divider 100, the divider for perceptual weighting 120, and the divider for the reproduction filter 130. The frame divider 100 receives the speech signal from the input terminal 10, cuts it into frames of fixed length (for example 5 msec), and sends the resulting speech vectors to the intra-frame divider 105. The intra-frame divider 105 further divides the speech vector received from the frame divider 100 (for example into halves) and passes the divided speech sub-vectors to the weighting filter 110.
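A minimal sketch of blocks 100 and 105 (the 8 kHz sampling rate is an assumption, consistent with coding at 8 kbps or below; the patent itself only gives the 5 msec frame length and the split into halves as examples):

```python
import numpy as np

def frame_divider(signal, fs=8000, frame_ms=5):
    """Block 100: cut the input into frames of frame_ms (5 msec = 40 samples at 8 kHz)."""
    n = int(fs * frame_ms / 1000)
    usable = (len(signal) // n) * n
    return np.asarray(signal[:usable], dtype=float).reshape(-1, n)

def intra_frame_divider(frame, n_sections=2):
    """Block 105: split one frame into n_sections sub-vectors (e.g. two halves)."""
    return np.array_split(np.asarray(frame, dtype=float), n_sections)
```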

[0031] The divider for perceptual weighting 120 receives the speech signal from the input terminal 10, cuts out the segment used for computing the LPC coefficients for perceptual weighting (for example with a window length of 20 msec), and sends the cut-out speech signal to the LPC analyzer 125. The LPC analyzer 125 applies linear prediction analysis to the speech data received from the divider for perceptual weighting 120 and sends the resulting LPC coefficients to the LPC coefficient interpolator 127. The LPC coefficient interpolator 127 computes, from the LPC coefficients received from the LPC analyzer 125, interpolated LPC coefficients corresponding to each divided speech sub-vector and sends them to the weighting filter 110, the weighting filter 160, and the weighting filter 195.
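A sketch of the windowed cut-out (block 120) and of the interpolator (block 127). The Hamming window, the linear interpolation rule, and interpolating LPC coefficients directly (rather than, say, LSP parameters) are assumptions; the patent only states that an interpolated coefficient set is produced for each divided speech vector:

```python
import numpy as np

def cut_analysis_window(signal, center, win_len):
    """Blocks 120/130: cut out win_len samples (e.g. 20 msec) around the current
    frame and apply a window before LPC analysis (window choice assumed)."""
    start = max(0, center - win_len // 2)
    seg = np.asarray(signal[start:start + win_len], dtype=float)
    return seg * np.hamming(len(seg))

def interpolate_lpc_per_section(prev_lpc, curr_lpc, n_sections=2):
    """Blocks 127/142: one coefficient set per intra-frame section, blended between
    the previous and current analysis results (interpolation rule assumed)."""
    prev_lpc = np.asarray(prev_lpc, dtype=float)
    curr_lpc = np.asarray(curr_lpc, dtype=float)
    sets = []
    for i in range(n_sections):
        t = (i + 1.0) / n_sections      # later sections lean toward the current frame
        sets.append((1.0 - t) * prev_lpc + t * curr_lpc)
    return sets
```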

[0032] The divider for the reproduction filter 130 receives the speech signal from the input terminal 10, cuts out the segment used for computing the LPC coefficients for reproduction (for example with a window length of 20 msec), and sends the cut-out speech signal to the LPC analyzer 135. The LPC analyzer 135 applies linear prediction analysis to the speech data received from the divider 130 and sends the resulting LPC coefficients to the LPC coefficient quantizer 140. The LPC coefficient quantizer 140 receives the LPC coefficients from the LPC analyzer 135, sends the quantization index to the multiplexer 300, and sends the decoded LPC coefficients to the LPC coefficient interpolator 142. The LPC coefficient interpolator 142 computes interpolated LPC coefficients corresponding to each divided speech sub-vector and sends them to the reproduction filter 155 and the reproduction filter 190.

[0033] The weighting filter 110 perceptually weights the divided speech sub-vectors received from the intra-frame divider 105 using the interpolated LPC coefficients received from the LPC coefficient interpolator 127, and passes the perceptually weighted sub-vectors to the intra-frame connector 115. The intra-frame connector 115 concatenates the perceptually weighted sub-vectors received from the weighting filter 110 and sends the result to the subtractor 175 and the minimum squared error index detector 170.

[0034] The adaptive codebook 145 stores the reproduced excitation signal received from the adder 205, and sends adaptive code vectors, generated by cutting out segments of frame length, to the intra-frame divider 150. The intra-frame divider 150 further divides the adaptive code vector received from the adaptive codebook 145 (for example into halves) and passes the divided sub-vectors to the reproduction filter 155. The reproduction filter 155 reproduces the divided adaptive code sub-vectors using the interpolated LPC coefficients received from the LPC coefficient interpolator 142 and passes them to the weighting filter 160. The weighting filter 160 perceptually weights the signal vectors reproduced by the reproduction filter 155 using the interpolated LPC coefficients received from the LPC coefficient interpolator 127 and passes them to the intra-frame connector 165. The intra-frame connector 165 concatenates the perceptually weighted divided adaptive code sub-vectors received from the weighting filter 160 and sends the resulting weighted reproduced adaptive code vector to the minimum squared error index detector 170. The minimum squared error index detector 170 computes the squared distance between the weighted speech vector received from the intra-frame connector 115 and the weighted reproduced adaptive code vector received from the intra-frame connector 165; when the squared distance is minimum, it sends the weighted reproduced adaptive code vector to the subtractor 175, sends the adaptive code vector received from the adaptive codebook 145 to the adder 205, and sends its index to the multiplexer 300.
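A sketch of this adaptive-codebook stage (blocks 145, 150, 155, 160, 165 and 170). The lag range used to cut candidates out of the past excitation, the filter forms, and the omission of gain scaling in the selection are assumptions or simplifications made for brevity; the distance actually used is the one of Equation (1):

```python
import numpy as np
from scipy.signal import lfilter

def adaptive_candidates(past_excitation, frame_len, min_lag=20, max_lag=147):
    """Block 145: build candidate adaptive code vectors by cutting frame_len samples
    out of the past reproduced excitation at each lag (lag range assumed)."""
    cands = []
    past = np.asarray(past_excitation, dtype=float)
    for lag in range(min_lag, max_lag + 1):
        seg = past[-lag:]
        reps = int(np.ceil(frame_len / float(lag)))
        cands.append(np.tile(seg, reps)[:frame_len])   # repeat the segment if lag < frame
    return cands

def reproduce_and_weight(code_vector, synth_lpcs, weight_lpcs, gamma1=0.9, gamma2=0.6):
    """Blocks 150/155/160/165: divide the code vector, pass each sub-vector through the
    reproduction filter 1/A(z) of its section, apply that section's weighting filter,
    then concatenate.  Filter state is reset per section in this sketch."""
    parts = np.array_split(np.asarray(code_vector, dtype=float), len(synth_lpcs))
    out = []
    for p, a_s, a_w in zip(parts, synth_lpcs, weight_lpcs):
        syn = lfilter([1.0], np.asarray(a_s, dtype=float), p)       # reproduction filter
        a_w = np.asarray(a_w, dtype=float)
        num = a_w * gamma1 ** np.arange(len(a_w))
        den = a_w * gamma2 ** np.arange(len(a_w))
        out.append(lfilter(num, den, syn))                          # weighting filter
    return np.concatenate(out)

def search_adaptive(xw, candidates, synth_lpcs, weight_lpcs):
    """Block 170: index of the candidate whose reproduced, weighted version is closest
    (squared distance) to the weighted speech vector xw."""
    errs = [float(np.sum((xw - reproduce_and_weight(c, synth_lpcs, weight_lpcs)) ** 2))
            for c in candidates]
    j = int(np.argmin(errs))
    return j, errs[j]
```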

[0035] The subtractor 175 subtracts the weighted reproduced adaptive code vector received from the minimum squared error index detector 170 from the weighted speech vector received from the intra-frame connector 115 and sends the resulting adaptive codebook residual vector to the minimum squared error index detector 207.

[0036] The excitation codebook 180 sends excitation code vectors to the intra-frame divider 185. The intra-frame divider 185 further divides the excitation code vector received from the excitation codebook 180 (for example into halves) and passes the divided sub-vectors to the reproduction filter 190. The reproduction filter 190 reproduces the divided excitation code sub-vectors using the interpolated LPC coefficients received from the LPC coefficient interpolator 142 and passes them to the weighting filter 195. The weighting filter 195 perceptually weights the reproduced vectors received from the reproduction filter 190 using the interpolated LPC coefficients received from the LPC coefficient interpolator 127 and passes them to the intra-frame connector 200. The intra-frame connector 200 concatenates the perceptually weighted divided excitation code sub-vectors received from the weighting filter 195 and sends the resulting weighted reproduced excitation code vector to the minimum squared error index detector 207. The minimum squared error index detector 207 computes the squared distance between the adaptive codebook residual vector received from the subtractor 175 and the weighted reproduced excitation code vector received from the intra-frame connector 200; when the squared distance is minimum, it sends the excitation code vector received from the excitation codebook 180 to the adder 205 and sends its index to the multiplexer 300.
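The excitation stage mirrors the adaptive stage, with the residual from the subtractor 175 as the target. Assuming a reproduce-and-weight helper like the one sketched for the adaptive search, it reduces to:

```python
import numpy as np

def excitation_stage(xw, weighted_adaptive, excitation_candidates, reproduce_and_weight_fn):
    """Blocks 175 and 207: form the adaptive-codebook residual, then pick the excitation
    code vector whose reproduced, weighted version is closest to that residual.
    reproduce_and_weight_fn stands in for the helper sketched above (an assumption)."""
    residual = np.asarray(xw, dtype=float) - np.asarray(weighted_adaptive, dtype=float)
    errs = [float(np.sum((residual - reproduce_and_weight_fn(c)) ** 2))
            for c in excitation_candidates]        # minimum squared error detector 207
    j = int(np.argmin(errs))
    return j, errs[j]
```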

[0037] The adder 205 adds the adaptive code vector received from the minimum squared error index detector 170 and the excitation code vector received from the minimum squared error index detector 207, and sends the sum to the adaptive codebook 145.

[0038] The multiplexer 300 combines the indexes received from the LPC coefficient quantizer 140, the minimum squared error index detector 170, and the minimum squared error index detector 207, and sends the result to the output terminal 305.

[0039] In the coding scheme described above, the LPC coefficients for the reproduction filter, or the quantized LPC coefficients, may be used as the LPC coefficients for perceptual weighting; in that case the divider for perceptual weighting 120 and the LPC analyzer 125 become unnecessary. If the LPC coefficients for perceptual weighting are obtained by performing the linear prediction analysis as many times per frame as the number of intra-frame divisions, the LPC coefficient interpolator 127 becomes unnecessary. The number of intra-frame divisions may also be one. Further, the LPC analyzer may perform linear prediction analysis on a speech signal of a predetermined window length (for example 20 msec) once per period several times as long as the frame (for example every 20 msec).

[0040]

[Effects of the Invention]
As described above, according to the present invention, performing the codebook search with a weighted reproduction squared distance computed over divisions within the frame yields better sound quality than the conventional method.

[Brief Description of the Drawings]

FIG. 1 is a block diagram showing an embodiment of the speech coding apparatus according to the present invention.

[Description of Reference Signs]
10 input terminal
100 frame divider
105 intra-frame divider
110 weighting filter
115 intra-frame connector
120 divider for perceptual weighting
125 LPC analyzer
127 LPC coefficient interpolator
130 divider for reproduction filter
135 LPC analyzer
140 LPC coefficient quantizer
142 LPC coefficient interpolator
145 adaptive codebook
150 intra-frame divider
155 reproduction filter
160 weighting filter
165 intra-frame connector
170 minimum squared error index detector
175 subtractor
180 excitation (sound source) codebook
185 intra-frame divider
190 reproduction filter
195 weighting filter
200 intra-frame connector
205 adder
207 minimum squared error index detector
300 multiplexer
305 output terminal

Claims (1)

[Claims]

[Claim 1] A speech coding apparatus comprising at least: means for cutting an input speech signal into frames of fixed length to generate a speech vector; means for extracting LPC coefficients by linear prediction analysis of the input speech signal at every predetermined interval; means for determining, from past reproduced excitation signals, an adaptive code vector of the frame equal in length to said frame, using said speech vector and said LPC coefficients; and means for determining an excitation code vector of the frame from a previously designed excitation codebook holding a plurality of vectors equal in length to said frame, using said speech vector, said LPC coefficients, and said determined adaptive code vector, the apparatus being characterized by comprising: means for obtaining section LPC coefficients by linear prediction analysis of said input speech signal for each predetermined section within the frame, a weighting function being used when said adaptive code vector of the frame and said excitation code vector of the frame are determined; and means for forming said weighting function from said obtained section LPC coefficients.
JP03588192A 1992-02-24 1992-02-24 Audio coding device Expired - Fee Related JP3248215B2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP03588192A JP3248215B2 (en) 1992-02-24 1992-02-24 Audio coding device
DE69329476T DE69329476T2 (en) 1992-02-24 1993-02-23 Speech coding system
EP93102794A EP0557940B1 (en) 1992-02-24 1993-02-23 Speech coding system
CA002090205A CA2090205C (en) 1992-02-24 1993-02-23 Speech coding system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP03588192A JP3248215B2 (en) 1992-02-24 1992-02-24 Audio coding device

Publications (2)

Publication Number Publication Date
JPH05232997A true JPH05232997A (en) 1993-09-10
JP3248215B2 JP3248215B2 (en) 2002-01-21

Family

ID=12454351

Family Applications (1)

Application Number Title Priority Date Filing Date
JP03588192A Expired - Fee Related JP3248215B2 (en) 1992-02-24 1992-02-24 Audio coding device

Country Status (4)

Country Link
EP (1) EP0557940B1 (en)
JP (1) JP3248215B2 (en)
CA (1) CA2090205C (en)
DE (1) DE69329476T2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5761632A (en) * 1993-06-30 1998-06-02 Nec Corporation Vector quantinizer with distance measure calculated by using correlations

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE69625523T2 (en) 1995-05-10 2003-07-10 Nintendo Co Ltd Control unit with analog joystick
JP3544268B2 (en) 1995-10-09 2004-07-21 任天堂株式会社 Three-dimensional image processing apparatus and image processing method using the same
JP3524247B2 (en) 1995-10-09 2004-05-10 任天堂株式会社 Game machine and game machine system using the same
CN1149465C (en) 1995-10-09 2004-05-12 任天堂株式会社 Stereo image processing system
US6267673B1 (en) 1996-09-20 2001-07-31 Nintendo Co., Ltd. Video game system with state of next world dependent upon manner of entry from previous world via a portal
US6022274A (en) 1995-11-22 2000-02-08 Nintendo Co., Ltd. Video game system using memory module
US6190257B1 (en) 1995-11-22 2001-02-20 Nintendo Co., Ltd. Systems and method for providing security in a video game system
TW419645B (en) * 1996-05-24 2001-01-21 Koninkl Philips Electronics Nv A method for coding Human speech and an apparatus for reproducing human speech so coded
US6241610B1 (en) 1996-09-20 2001-06-05 Nintendo Co., Ltd. Three-dimensional image processing system having dynamically changing character polygon number
US6139434A (en) 1996-09-24 2000-10-31 Nintendo Co., Ltd. Three-dimensional image processing apparatus with enhanced automatic and user point of view control
JP3655438B2 (en) 1997-07-17 2005-06-02 任天堂株式会社 Video game system

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05108096A (en) * 1991-10-18 1993-04-30 Sanyo Electric Co Ltd Vector drive type speech encoding device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0342687B1 (en) * 1988-05-20 1995-04-12 Nec Corporation Coded speech communication system having code books for synthesizing small-amplitude components
US4975956A (en) * 1989-07-26 1990-12-04 Itt Corporation Low-bit-rate speech coder using LPC data reduction processing
EP0443548B1 (en) * 1990-02-22 2003-07-23 Nec Corporation Speech coder

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05108096A (en) * 1991-10-18 1993-04-30 Sanyo Electric Co Ltd Vector drive type speech encoding device

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5761632A (en) * 1993-06-30 1998-06-02 Nec Corporation Vector quantinizer with distance measure calculated by using correlations

Also Published As

Publication number Publication date
CA2090205A1 (en) 1993-08-25
EP0557940B1 (en) 2000-09-27
DE69329476T2 (en) 2001-02-08
CA2090205C (en) 1998-08-04
EP0557940A3 (en) 1994-03-23
EP0557940A2 (en) 1993-09-01
DE69329476D1 (en) 2000-11-02
JP3248215B2 (en) 2002-01-21

Similar Documents

Publication Publication Date Title
JP3254687B2 (en) Audio coding method
US6249758B1 (en) Apparatus and method for coding speech signals by making use of voice/unvoiced characteristics of the speech signals
JP2626223B2 (en) Audio coding device
RU2005137320A (en) METHOD AND DEVICE FOR QUANTIZATION OF AMPLIFICATION IN WIDE-BAND SPEECH CODING WITH VARIABLE BIT TRANSMISSION SPEED
US6052661A (en) Speech encoding apparatus and speech encoding and decoding apparatus
JPH05232997A (en) Voice coding device
US5526464A (en) Reducing search complexity for code-excited linear prediction (CELP) coding
JPH09152896A (en) Sound path prediction coefficient encoding/decoding circuit, sound path prediction coefficient encoding circuit, sound path prediction coefficient decoding circuit, sound encoding device and sound decoding device
JP3089769B2 (en) Audio coding device
JP3266178B2 (en) Audio coding device
JPH08179795A (en) Voice pitch lag coding method and device
JP3144009B2 (en) Speech codec
JPH0341500A (en) Low-delay low bit-rate voice coder
JP2002268686A (en) Voice coder and voice decoder
JPH09319398A (en) Signal encoder
JP2970407B2 (en) Speech excitation signal encoding device
US8825475B2 (en) Transform-domain codebook in a CELP coder and decoder
JP3490324B2 (en) Acoustic signal encoding device, decoding device, these methods, and program recording medium
JP2736157B2 (en) Encoding device
JPH05232996A (en) Voice coding device
JP3047761B2 (en) Audio coding device
JP3192051B2 (en) Audio coding device
JP3350340B2 (en) Voice coding method and voice decoding method
JP2000029499A (en) Voice coder and voice encoding and decoding apparatus
JPH09179593A (en) Speech encoding device

Legal Events

Date Code Title Description
LAPS Cancellation because of no payment of annual fees