JPS6170599A - Voice reproduction system - Google Patents

Voice reproduction system

Info

Publication number
JPS6170599A
JPS6170599A JP59192113A JP19211384A
Authority
JP
Japan
Prior art keywords
waveform
residual
speech
voice
original
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP59192113A
Other languages
Japanese (ja)
Other versions
JPH0576639B2 (en)
Inventor
国澤 寛治
糸山 博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Electric Works Co Ltd
Original Assignee
Matsushita Electric Works Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electric Works Ltd filed Critical Matsushita Electric Works Ltd
Priority to JP59192113A priority Critical patent/JPS6170599A/en
Publication of JPS6170599A publication Critical patent/JPS6170599A/en
Publication of JPH0576639B2 publication Critical patent/JPH0576639B2/ja
Granted legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02E: REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E60/00: Enabling technologies; Technologies with a potential or indirect contribution to GHG emissions mitigation
    • Y02E60/10: Energy storage using batteries

Abstract

(57) [Abstract] This publication contains application data from before electronic filing, so no abstract data is recorded.

Description

DETAILED DESCRIPTION OF THE INVENTION

[Technical Field] The present invention relates to a voice reproduction system based on linear prediction methods such as the PARCOR method and the LPC method.

[Background Art] Speech synthesis LSIs using linear prediction methods such as the PARCOR method are now commercially available in various forms. Speech synthesis by this kind of linear prediction determines the prediction coefficients Ak that minimize the mean square value of the residual Zn. When the prediction order p is sufficiently large (p ≥ 8), the residual Zn becomes an impulse train corresponding to the source and the pitch in voiced intervals, and close to white noise in unvoiced intervals. Exploiting this property, the residual waveform is replaced in voiced intervals by an impulse train having the pitch period and the average residual intensity (a triangular wave or a one-pitch residual waveform is sometimes used instead), and in unvoiced intervals by white noise having the average residual intensity, so that the information is compressed while good sound quality is maintained.
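The patent gives no code, but the first-pass analysis it builds on, finding coefficients Ak that minimize the mean square of the residual, can be sketched with the standard autocorrelation (Levinson-Durbin) method. The function and variable names below are illustrative, not from the patent:

```python
import numpy as np

def lpc_coefficients(signal, order):
    """Return coefficients a[1..p] minimizing the mean square of the
    residual z[n] = s[n] - sum_k a[k]*s[n-k] (autocorrelation method)."""
    n = len(signal)
    # Autocorrelation r[0..p]
    r = np.array([np.dot(signal[:n - k], signal[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)  # a[0] is unused
    err = r[0]               # residual energy; shrinks at each order
    for i in range(1, order + 1):
        # Reflection (PARCOR) coefficient for order i
        k = (r[i] - np.dot(a[1:i], r[1:i][::-1])) / err
        a_new = a.copy()
        a_new[i] = k
        for j in range(1, i):
            a_new[j] = a[j] - k * a[i - j]
        a = a_new
        err *= (1.0 - k * k)
    return a[1:], err
```

With a long enough signal the recovered coefficients approach the true autoregressive parameters, and `err` gives the minimized residual energy that the background section refers to.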

FIG. 2 schematically shows the residual waveform. In the unvoiced interval of (a), the speech parameters are determined so as to minimize the mean square value of the white-noise-like residual. In the voiced interval of (b), however, an impulse-like residual is added, so the speech parameters are in this case determined so as to minimize the mean square value of the whole residual, impulses and white noise together. Since the impulse portion has a large amplitude, it is considered to account for a considerable part of the mean square value; consequently, in figure (b) the parameters are determined as if the zero level were raised to the solid-line position, compared with figure (a), by the impulse share D of the mean square, and the mean square value of the white-noise-like residual alone is no longer minimized.

In the process of extracting the speech parameters, however, correlation remains in the white-noise-like portion, which occupies most of each pitch period, and it is by minimizing this white-noise-like residual that the vocal-tract characteristics are extracted from that portion. It is therefore unreasonable for the white-noise-like residual to be left unminimized because of the impulses, and the fidelity with which the original sound can be restored deteriorates accordingly. Moreover, when the speech is synthesized, only the impulse train is used as the source waveform in voiced intervals; in this sense, too, it is desirable to determine the speech parameters at the analysis stage so that the white-noise residual becomes zero.

[Object of the Invention]

The present invention has been made in view of the above problems. Its object is, in the predictive analysis of a voice reproduction system that approximates the residual waveform of linear prediction by an impulse sequence and white random noise, to determine the speech parameters so as to minimize the mean square value of only the white-noise-like residual in voiced intervals, and thereby to improve the fidelity of the restored speech.

[Disclosure of the Invention]

In a voice reproduction system that approximates the residual waveform of linear prediction by an impulse sequence and white noise, the present invention first performs linear prediction analysis on the original speech waveform, generates a residual waveform using the residual waveform information thus obtained, subtracts this residual waveform from the original speech waveform, performs linear prediction analysis again on the resulting waveform to obtain the speech parameters, and synthesizes the speech waveform using these speech parameters and the above residual waveform information. In this way the mean square value of the impulses alone, that is, D in FIG. 2(b), is cancelled before the prediction.
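The disclosed two-pass procedure can be summarized as a short sketch. Here `lpc` and `make_excitation` stand in for the analysis and source-generation steps; they are assumed helpers, not functions given by the patent:

```python
import numpy as np

def two_pass_analysis(s0, lpc, make_excitation):
    """Sketch of the disclosed method: analyze the original waveform,
    regenerate an idealized residual from the residual information only,
    subtract it from the original, and analyze again."""
    _, pitch, amp = lpc(s0)                    # first prediction: Ak is
                                               # discarded; keep P and U
    z1 = make_excitation(pitch, amp, len(s0))  # impulse train + white noise
    s1 = s0 - z1                               # cancel the impulse contribution
    ak, _, _ = lpc(s1)                         # second prediction: final Ak
    return ak, pitch, amp                      # speech info to store/transmit
```

The essential point is that only the second-pass coefficients are kept, while P and U are shared by both the analysis and the reproduction sides.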

FIG. 1 is a block diagram showing one embodiment of the present invention. The original speech passes through a low-pass filter 1, undergoes A/D conversion 2, and is stored in a memory 3. The original speech waveform S0 is then read out of the memory 3, and linear prediction analysis is performed by a microcomputer 4. The first prediction 5 extracts the speech parameters Ak and yields, as residual information, a pitch parameter P and an amplitude parameter U; the speech parameters Ak and the residual waveform Z0 obtained at this stage are discarded, and only the residual information P and U is used. P and U are applied to a source generation circuit 6 to generate a residual waveform Z1 consisting only of a pure impulse train and white noise. This residual waveform is subtracted from the original speech waveform S0 read out of the memory 3, and a second prediction 7 is performed on the waveform S1 obtained in this way. The speech parameters Ak obtained by the second prediction 7, together with the P and U parameters (identical to the P and U obtained the first time), are stored or transmitted 8 as the speech information. On the reproduction side, P and U are applied to a source generation circuit 9 to generate a source waveform Z2 identical to Z1; this is applied to a digital filter 10, which synthesizes the speech waveform S2 from the speech parameters Ak, and the output is passed through a D/A converter 11 and a low-pass filter 12 to give the synthesized speech.
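What the source generation circuits 6 and 9 compute from P and U might look like the sketch below. The voiced/unvoiced flag and the scaling convention are assumptions made for illustration; the patent itself only specifies a pure impulse train and white noise at the average residual intensity:

```python
import numpy as np

def make_excitation(n_samples, voiced, pitch_period, amplitude, seed=0):
    """Source waveform: an impulse train with the pitch period in voiced
    intervals, white noise in unvoiced intervals, both scaled by the
    average residual intensity (scaling convention simplified here)."""
    if voiced:
        z = np.zeros(n_samples)
        z[::pitch_period] = amplitude              # one impulse per pitch period
    else:
        rng = np.random.default_rng(seed)          # deterministic noise source
        z = rng.normal(0.0, amplitude, n_samples)  # white noise
    return z
```

Because the circuit is driven only by P and U, the analysis side (circuit 6) and the reproduction side (circuit 9) produce identical waveforms Z1 and Z2, which is what lets the second-pass parameters Ak match the excitation at synthesis time.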

The reason why subtracting the residual waveform Z1 from the original speech waveform S0 and performing the second prediction on the resulting waveform S1 cancels the mean square value of the impulses, that is, D in FIG. 2(b), is as follows. Suppose the prediction were performed with only the source (residual) waveform Z1 as input. In voiced intervals Z1 consists of nothing but an impulse train, and since the impulses occupy only a small fraction of each pitch period they barely register in the analysis; all the speech parameters therefore become substantially zero, and the residual is the same impulse train as the source. Consequently, when the prediction is performed on the waveform S1 (= S0 − Z1), obtained by superimposing the inverted source waveform (−Z1) on the original speech waveform S0, the inverted impulse train is superimposed on the residual, the mean square contribution of the original impulses is cancelled, and the mean square value of only the white-noise-like residual, the pure error component, is minimized, so that the correlation remaining in this portion is removed almost completely.
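The cancellation can be checked numerically. In the toy example below (synthetic numbers, not from the patent), subtracting the impulse train from a voiced-interval residual restores the mean square of the white-noise part, removing the offset D of FIG. 2(b):

```python
import numpy as np

rng = np.random.default_rng(0)
n, period, amp = 8000, 80, 10.0

noise = rng.normal(0.0, 1.0, n)      # white-noise-like part of the residual
impulses = np.zeros(n)
impulses[::period] = amp             # impulse train at the pitch period
residual = noise + impulses          # voiced-interval residual, FIG. 2(b)

mse_total = np.mean(residual ** 2)               # what conventional analysis sees
mse_after = np.mean((residual - impulses) ** 2)  # after subtracting the impulses
mse_noise = np.mean(noise ** 2)                  # the quantity the invention targets

# The impulses inflate the mean square by roughly amp**2 / period (the offset D);
# subtracting the impulse train removes that offset and leaves the noise term.
```

With these numbers the impulse offset is about 10² / 80 ≈ 1.25 on top of a unit noise mean square, which is exactly the inflation the second analysis pass avoids.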

[Effects of the Invention]

As described above, the present invention first generates a residual waveform using the residual waveform information obtained by predicting the original speech waveform, performs prediction again on the waveform obtained by subtracting this residual waveform from the original speech waveform to obtain the speech parameters, and synthesizes the speech waveform using these speech parameters and the above residual waveform information. The speech parameters can therefore be extracted so as to minimize the mean square value of the white-noise-like residual in voiced and unvoiced intervals alike, and the invention has the advantage that the fidelity with which the original sound is restored, when speech is reproduced by approximating the source waveform with an impulse train and white noise, can be improved by an extremely simple configuration.
[Effect of the invention 1 As mentioned above, the present invention first generates a residual waveform using the residual waveform information obtained by predicting the original speech waveform, and then subtracts this residual waveform from the original speech waveform. The waveform is predicted again to obtain each voice parameter, and the voice waveform is synthesized using these voice parameters and the residual salt waveform information. Speech parameters can be extracted so as to minimize the root mean square value of the residuals of This has the advantage that it can be improved by a new configuration.

[Brief Description of the Drawings]

FIG. 1 is a block circuit diagram showing one embodiment of the present invention, and FIGS. 2(a) and (b) are waveform diagrams illustrating its operation. 1 is a low-pass filter, 2 an A/D converter, 3 a memory, 4 a microcomputer, 5 a first prediction circuit, 6 a source generation circuit, 7 a second prediction circuit, 8 a storage or transmission circuit, 9 a source generation circuit, 10 a digital filter, 11 a D/A converter, and 12 a low-pass filter; S0 is the original speech waveform, S1 the waveform after subtraction, S2 the reproduced speech waveform, Z0 the residual waveform, and Z1 and Z2 the approximated source (residual) waveforms.

Agent: Patent Attorney 石1)艮七. Procedural amendment (voluntary), December 29, 1984 (Showa 59).

Claims (1)

[Claims]

(1) In a voice reproduction system that approximates the residual waveform of linear prediction by an impulse train and white noise, a voice reproduction system characterized in that linear prediction analysis is first performed on the original speech waveform, a residual waveform is generated using the residual waveform information thus obtained, linear prediction analysis is performed on the waveform obtained by subtracting this residual waveform from the original speech waveform to obtain the speech parameters, and a speech waveform is synthesized using these speech parameters and the above residual waveform information.
JP59192113A 1984-09-13 1984-09-13 Voice reproduction system Granted JPS6170599A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP59192113A JPS6170599A (en) 1984-09-13 1984-09-13 Voice reproduction system


Publications (2)

Publication Number Publication Date
JPS6170599A true JPS6170599A (en) 1986-04-11
JPH0576639B2 JPH0576639B2 (en) 1993-10-25

Family

ID=16285878

Family Applications (1)

Application Number Title Priority Date Filing Date
JP59192113A Granted JPS6170599A (en) 1984-09-13 1984-09-13 Voice reproduction system

Country Status (1)

Country Link
JP (1) JPS6170599A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS60239800A (en) * 1984-05-14 1985-11-28 日本電気株式会社 Residual excitation type vocoder


Also Published As

Publication number Publication date
JPH0576639B2 (en) 1993-10-25

Similar Documents

Publication Publication Date Title
JPS6170599A (en) Voice reproduction system
JPH0237600B2 (en)
Bertini et al. Voice transformation algorithms with real time DSP rapid prototyping tools
JPH05500573A (en) Digital audio decoder with post filter with reduced spectral distortion
JPS5898793A (en) Voice synthesizer
JPS59102297A (en) Voice synthesizer
JPS5837697A (en) Voice memory reproducer
JPS6170600A (en) Voice reproduction system
JPH11327598A (en) Helium voice restoring device
JPH02245800A (en) Voice reproduction system
JPH0339320B2 (en)
JPH06161460A (en) Sound source device for musical sound signal
JPS63244100A (en) Voice analyzer and voice synthesizer
JPH0690638B2 (en) Speech analysis method
JPH0876798A (en) Wide band voice signal restoration method
JPS5876897A (en) Voice synthesizer
JPS58215697A (en) Voice coding/decoding system
JPS6143797A (en) Voice editing output system
JPS5831393A (en) Voice synthesizer
JPS60500A (en) Voice analyzer/synthesizer
JPS5849879B2 (en) Audio information compression recording and playback method
JPS5849878B2 (en) Speech analysis/synthesis method
JPS62150397A (en) Voice information encoding system
JPS62159198A (en) Voice synthesization system
JPS61219999A (en) Multipulse encoder