JPS61238099A - Word voice recognition equipment - Google Patents

Word voice recognition equipment

Info

Publication number
JPS61238099A
JPS61238099A
Authority
JP
Japan
Prior art keywords
word
phoneme
recognition
phonemes
section
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP60080030A
Other languages
Japanese (ja)
Other versions
JPH0567040B2 (en)
Inventor
昭一 松永
清宏 鹿野
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp filed Critical Nippon Telegraph and Telephone Corp
Priority to JP60080030A priority Critical patent/JPS61238099A/en
Publication of JPS61238099A publication Critical patent/JPS61238099A/en
Publication of JPH0567040B2 publication Critical patent/JPH0567040B2/ja
Granted legal-status Critical Current


Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/60Other road transportation technologies with climate change mitigation effect
    • Y02T10/62Hybrid vehicles
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/60Other road transportation technologies with climate change mitigation effect
    • Y02T10/64Electric machine technologies in electromobility

Abstract

(57) [Abstract] This publication contains application data filed before electronic filing, so no abstract data is recorded.

Description

DETAILED DESCRIPTION OF THE INVENTION

"Field of Industrial Application"

The present invention relates to a word speech recognition device based on the recognition of phoneme units.

"Prior Art"

Conventionally, in word speech recognition devices based on this kind of phoneme-unit recognition, the similarity between the feature parameter time series of the input speech and each word, expressed as a sequence of phoneme symbols in a word dictionary unit, is computed, and the word with the highest similarity is taken as the recognition result. In such devices, candidate words were selected from the word dictionary unit using phonemes alone.

(For example, Itabashi et al., "Vocabulary reduction effect by specifying phoneme sequences within words," Transactions of the IECE of Japan, Vol. J67-D, No. 8 (1984-08); Sawai et al., "A study of preliminary selection for large-vocabulary word speech recognition," Acoustical Society of Japan Speech Study Group materials, S84-14 (1984-06).) In other words, these methods are selection methods that consider only the phoneme order of a partial phoneme sequence; the connection relationship between phonemes, that is, whether two phonemes are directly adjacent or whether an unknown phoneme may lie between them, was not considered. For this reason, the word selection capability was insufficient and many candidate words were required.

Furthermore, such methods could not adapt the word selection capability to the condition of the utterance, for example selecting fewer words for a clearer utterance and, conversely, more words for a more ambiguous one. In addition, no sufficient corrective measures were taken against phoneme detection errors.

Because of this, the number of candidate words whose similarity must be computed in the word recognition unit becomes large and the processing time becomes long, while reducing the number of candidate words lowers the recognition rate.

"Means for Solving the Problem"

According to the present invention, intervals in which a phoneme certainly exists are detected in phoneme units from the feature parameter time series of the input speech; that is, the input speech is segmented into phoneme units. The phonemes of the reliable portions (intervals) of the speech obtained by this segmentation are detected, and words containing the same phonemes, with the connection relationships also taken into account, are selected from the word dictionary unit as candidate words. The similarity to the input speech feature parameter time series is then computed only for these selected candidate words. In this way a high recognition rate is obtained while computing similarities against only a small number of candidate words.

If necessary, candidate word selection is performed while correcting phoneme detection errors associated with phonemes or phoneme chains.

"Embodiment"

The figure shows an embodiment of the present invention. Speech entering from input terminal 1 is converted into a digital signal in feature extraction unit 2 and, after LPC analysis, is converted into feature parameters every frame (for example, 8 milliseconds). These feature parameters include the normalized logarithmic power of the input speech, the level and spectral distance relative to noise, the power dip (the second derivative of a quadratic-curve approximation), the short-time (for example, 16 ms) spectral change, the long-time (for example, 48 ms) spectral change, the power ratio between the low and high frequency bands, and WLR measure values (a measure of closeness of spectral distance) relative to standard patterns of the five vowels and the syllabic nasal (N).

The feature parameter time series of the converted input speech is input to segmentation unit 3, which detects the intervals that can be reliably segmented in phoneme units, that is, the intervals in which a phoneme certainly exists. To find such intervals, thresholds are set for each of several, preferably three or more, feature parameters, for example power, power dip, and short-time spectral change. Segmentation is performed on the input speech feature parameters with one set of thresholds, that is, by testing whether the thresholds are exceeded simultaneously, and segmentation is likewise performed with another set of thresholds. Intervals that exceed both sets of thresholds and for which the two segmentations differ only slightly, for example by two or three frames or less, are taken as intervals that can be reliably segmented.
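As a concrete reading of the two-threshold procedure above, the following sketch applies two threshold sets to per-frame features and keeps only the intervals on which both segmentations agree; the feature name, threshold values, and frame tolerance are illustrative assumptions, not values fixed by the patent.

```python
def segments(frames, thresholds):
    """Intervals (start, end) of frames whose features all exceed their thresholds."""
    mask = [all(f[name] > t for name, t in thresholds.items()) for f in frames]
    runs, start = [], None
    for i, m in enumerate(mask + [False]):  # sentinel closes a trailing run
        if m and start is None:
            start = i
        elif not m and start is not None:
            runs.append((start, i))
            start = None
    return runs

def reliable(frames, strict, loose, tol=3):
    """Intervals detected under both threshold sets whose boundaries differ
    by at most `tol` frames, taken as reliably segmented phoneme intervals."""
    return [(s1, e1)
            for (s1, e1) in segments(frames, strict)
            for (s2, e2) in segments(frames, loose)
            if abs(s1 - s2) <= tol and abs(e1 - e2) <= tol]

# Example with a single "power" feature and two threshold sets:
frames = [{"power": p} for p in [0.1, 0.6, 0.7, 0.6, 0.1, 0.4, 0.4, 0.1]]
print(reliable(frames, {"power": 0.5}, {"power": 0.3}))  # [(1, 4)]
```

The weaker threshold set also fires on the ambiguous frames 5 to 7, but since the stricter set does not agree there, only frames 1 to 4 survive as a reliable interval.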

For the intervals segmented reliably in this way, probable-phoneme recognition unit 4 detects phonemes from the input speech feature parameter time series.

This phoneme detection can be carried out by conventional techniques, by computing the similarity to standard phonemes over the same series of feature parameters. In this example, whether a detected probable phoneme lies at the beginning or the end of a word is also detected, based on the speech power and its duration.
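A minimal stand-in for this per-interval phoneme detection is a nearest-template classifier; the two-dimensional feature vectors and template values below are invented for illustration, and plain Euclidean distance replaces the WLR measure named in the text.

```python
import math

# Assumed standard patterns (feature centroids) for a few phonemes.
TEMPLATES = {"A": (0.9, 0.1), "I": (0.1, 0.9), "N": (0.5, 0.5)}

def detect_phoneme(interval_frames):
    """Average the frames of a reliably segmented interval and return the
    phoneme whose standard pattern is nearest to that average."""
    n = len(interval_frames)
    dims = len(interval_frames[0])
    centroid = tuple(sum(f[d] for f in interval_frames) / n for d in range(dims))
    return min(TEMPLATES, key=lambda p: math.dist(centroid, TEMPLATES[p]))

print(detect_phoneme([(0.8, 0.2), (1.0, 0.0)]))  # A
```

In practice each detection would also carry the word-initial or word-final flag mentioned above, derived from power and duration.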

Candidate word selection unit 5 uses the detected probable phonemes while preserving their continuity and order, that is, preserving the connection relationships, and selects from word dictionary unit 6, as candidate words, the words whose phonemes have the same connection relationships.

This selection is performed while correcting, as necessary, errors in the recognition results of the probable phonemes. For this purpose, a phoneme recognition result correction rule unit 7 is provided, for example.

The error-prone confusions in phoneme recognition are known to some extent, and these relationships are stored in advance as correction rules in phoneme recognition result correction rule unit 7. The following correction rules, for example, can be considered.

(a) Errors on vowel sequences; for example, AI is easily confused with A or E.

(b) Errors on semivowels and contracted sounds.

(c) Errors in word-final segmentation: the last phoneme is dropped and the phoneme before it is misrecognized as word-final; the phonemes that tend to drop are known.

(d) Errors due to devoicing; the phonemes that are easily devoiced are known.

When candidate words are being selected from word dictionary unit 6 using the detected probable phonemes and no corresponding candidate word is found, phoneme recognition result correction rule unit 7 is consulted, likely errors in the detected probable phonemes are corrected, and candidate words are then selected from word dictionary unit 6.
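The fallback described here, trying the detected sequence first and applying a correction rule only when no dictionary word matches, might be sketched as follows; the rule table holds just the VAN-to-YAM rewrite taken from Case 7 below and is otherwise hypothetical.

```python
# Hypothetical correction-rule table (error-prone sequence -> correction).
RULES = {"VAN": "YAM"}

def select_with_correction(detected, dictionary, rules=RULES):
    """Select words containing the detected contiguous phoneme sequence;
    on failure, rewrite error-prone subsequences and retry."""
    hits = [w for w in dictionary if detected in w]
    if hits:
        return hits
    for wrong, right in rules.items():
        if wrong in detected:
            corrected = detected.replace(wrong, right)
            hits = [w for w in dictionary if corrected in w]
            if hits:
                return hits
    return []

print(select_with_correction("VAN", ["YAMA", "MITAKA"]))  # ['YAMA']
```

The direct match is always attempted first, so rules are consulted only when selection would otherwise fail, mirroring the order described in the text.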

The candidate words selected in this way are sent to word recognition unit 8, where the similarity between the input speech feature parameter time series from feature extraction unit 2 and each candidate word is computed. This similarity may be computed in the same way as in conventional techniques; the candidate word with the highest similarity is output from recognition result output unit 9 as the recognition result.
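The patent leaves the similarity computation to conventional techniques; one such conventional choice is dynamic time warping between the input feature sequence and a per-word template, sketched below as an assumption rather than the method the patent prescribes.

```python
import math

def dtw_distance(seq_a, seq_b):
    """Dynamic-time-warping distance between two sequences of feature
    vectors, using Euclidean distance between frames (lower = more similar)."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(seq_a[i - 1], seq_b[j - 1])
            D[i][j] = d + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

The candidate with the smallest distance, that is, the highest similarity, would then be output, e.g. `min(candidates, key=lambda w: dtw_distance(input_seq, templates[w]))`; the gain of the invention is that this cost is paid only for the pre-selected candidates.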

Next, concrete examples of candidate word selection, the essential part of this invention, are given. Suppose word dictionary unit 6 contains the words 1) SAKATA, 2) MITAKA, 3) TAKEHO, 4) KITAKATA, and 5) TAKEDA.

(Case 1) When probable-phoneme recognition unit 4 detects only the single phoneme E, words 3) and 5) are selected.

(Case 2) When the two consecutive phonemes KA are detected, words 1), 2), and 4) are selected. Conventional methods performed no segmentation and therefore did not detect phoneme continuity, only the order; so if, for example, the speech of word 5) is input, its E and D are not detected as phonemes, and the phonemes K and A are detected, word 5) is also wrongly retained as a candidate.

(Case 3) When the two consecutive phonemes TA are detected and they are moreover at the end of the word, words 1) and 4) are selected. Since conventional methods performed no segmentation, word 2) is also taken as a candidate unless KA clearly appears after TA.

(Case 4) When the four consecutive phonemes TAKA are detected, words 2) and 4) are selected. Conventionally, word 5) may also be selected for the same reason as above.

(Case 5) When the two consecutive phonemes MI and the single phoneme K are detected in that order, word 2) is selected.

(Case 6) When the phoneme T and a phoneme A not contiguous with it are detected in that order, words 2), 4), and 5) are selected. Conventional methods performed no segmentation and looked only at the order, and so would select words 1), 2), 3), 4), and 5).
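Cases 1 to 6 can be reproduced with a small matcher in which the detected phonemes are grouped into contiguous runs and separate runs may require a minimum number of unknown phonemes between them; this representation, and the reading TAKEHO for the partly garbled third dictionary entry, are interpretive assumptions rather than the patent's own formulation.

```python
# Example dictionary from the text (entry 3 is assumed to be TAKEHO).
DICTIONARY = ["SAKATA", "MITAKA", "TAKEHO", "KITAKATA", "TAKEDA"]

def match(word, runs, gaps=(), final=False):
    """True if each run of phonemes occurs contiguously in `word`, in order.
    gaps[i]: minimum unknown phonemes between run i and run i+1
    (1 = the runs must not be adjacent).  final: last run must end the word."""
    def place(pos, i):
        j = word.find(runs[i], pos)
        while j != -1:
            end = j + len(runs[i])
            last = i == len(runs) - 1
            if not (final and last and end != len(word)):
                gap = gaps[i] if i < len(gaps) else 0
                if last or place(end + gap, i + 1):
                    return True
            j = word.find(runs[i], j + 1)
        return False
    return place(0, 0)

def select(runs, gaps=(), final=False):
    return [w for w in DICTIONARY if match(w, runs, gaps, final)]

print(select(["E"]))                  # case 1: ['TAKEHO', 'TAKEDA']
print(select(["KA"]))                 # case 2: ['SAKATA', 'MITAKA', 'KITAKATA']
print(select(["TA"], final=True))     # case 3: ['SAKATA', 'KITAKATA']
print(select(["TAKA"]))               # case 4: ['MITAKA', 'KITAKATA']
print(select(["MI", "K"]))            # case 5: ['MITAKA']
print(select(["T", "A"], gaps=(1,)))  # case 6: ['MITAKA', 'KITAKATA', 'TAKEDA']
```

An order-only matcher without the gap and contiguity constraints reproduces the larger conventional candidate sets that the text contrasts against in cases 2, 3, 4, and 6.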

(場合7) 単語辞書部6にYA、 MAがあり、検出
した確からしい音韻が連続したVANである場合に、V
ANを含む単語を単語辞書部6から選択してゆく途中で
該当単語がなく選択できなくなり、訂正規則部7を参照
して、VANをYAMと訂正して、単語YAMAを候補
として選択する。
(Case 7) When the word dictionary section 6 has YA and MA, and the detected probable phonemes are continuous VAN, V
While selecting a word containing AN from the word dictionary section 6, the word cannot be selected because there is no corresponding word, so the user refers to the correction rule section 7, corrects VAN to YAM, and selects the word YAMA as a candidate.

"Effects of the Invention"

As explained above, according to the present invention, segmentation is performed to recognize probable phonemes, and these are used to preliminarily select word candidates, so the number of candidate words can be reduced without degrading recognition performance, and the recognition processing time is reduced accordingly.

For example, when a top-down and bottom-up speech recognition system (Matsunaga et al., "Speech recognition combining top-down and bottom-up processing," Acoustical Society of Japan Speech Study Group materials, S83-49 (1983-12)) was used as word recognition unit 8, then on speech data of 100 city names uttered by 50 speakers, with a word dictionary unit 6 of 100 city names, a recognition rate of 95.5% was obtained while the number of candidate words was reduced to an average of 21.1% and the processing time to 62.8% of the conventional technique; with 643 city names, a recognition rate of 82.0% was obtained while the number of candidate words was reduced to an average of 17.2% and the processing time to 53.8% of the conventional technique.

In the above, each unit is generally implemented by a dedicated or shared microprocessor.

[Brief Description of the Drawing]

The figure is a block diagram showing an example of a speech recognition device according to the present invention. 1: speech signal input terminal; 2: feature extraction unit; 3: segmentation unit; 4: probable-phoneme recognition unit; 5: candidate word selection unit; 6: word dictionary for speech recognition; 7: phoneme recognition result correction rules; 8: word recognition unit; 9: recognition result output unit.

Claims (1)

[Claims]

(1) In a word speech recognition device in which input speech is represented as a time series of feature parameters, the similarity between that feature parameter time series and words expressed as sequences of phoneme symbols from a word dictionary unit is computed in a word recognition unit, and a word of high similarity is taken as the recognition result, the word speech recognition device comprising: means for performing segmentation by detecting, in phoneme units, intervals of the input speech in which a phoneme certainly exists; means for detecting which phoneme each segmented interval is; and means for selecting words having the detected phonemes from the word dictionary unit and sending them to the word recognition unit so that their similarity to the feature parameters of the input speech is computed.
JP60080030A 1985-04-15 1985-04-15 Word voice recognition equipment Granted JPS61238099A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP60080030A JPS61238099A (en) 1985-04-15 1985-04-15 Word voice recognition equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP60080030A JPS61238099A (en) 1985-04-15 1985-04-15 Word voice recognition equipment

Publications (2)

Publication Number Publication Date
JPS61238099A true JPS61238099A (en) 1986-10-23
JPH0567040B2 JPH0567040B2 (en) 1993-09-24

Family

ID=13706869

Family Applications (1)

Application Number Title Priority Date Filing Date
JP60080030A Granted JPS61238099A (en) 1985-04-15 1985-04-15 Word voice recognition equipment

Country Status (1)

Country Link
JP (1) JPS61238099A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS63220298A (en) * 1987-03-10 1988-09-13 Fujitsu Ltd Word candidate curtailing apparatus for voice recognition
JPH01224799A (en) * 1988-03-04 1989-09-07 Fujitsu Ltd Clause candidate reducing system for voice recognition
JP2001249684A (en) * 2000-03-02 2001-09-14 Sony Corp Device and method for recognizing speech, and recording medium


Also Published As

Publication number Publication date
JPH0567040B2 (en) 1993-09-24

Similar Documents

Publication Publication Date Title
Miller Pitch detection by data reduction
JPS62217295A (en) Voice recognition system
JP3069531B2 (en) Voice recognition method
JPS61238099A (en) Word voice recognition equipment
JPH0558553B2 (en)
Villing et al. Performance limits for envelope based automatic syllable segmentation
JPS5939760B2 (en) voice recognition device
Niederjohn et al. Computer recognition of the continuant phonemes in connected English speech
JPS5936759B2 (en) Voice recognition method
Lienard Speech characterization from a rough spectral analysis
Elghonemy et al. Speaker independent isolated Arabic word recognition system
JPH0682275B2 (en) Voice recognizer
JPS599080B2 (en) Voice recognition method
JPS6344699A (en) Voice recognition equipment
JP2712586B2 (en) Pattern matching method for word speech recognition device
JPH01185599A (en) Speech recognizing circuit
JPH0567036B2 (en)
JP2891259B2 (en) Voice section detection device
JP2901976B2 (en) Pattern matching preliminary selection method
JPH03145167A (en) Voice recognition system
JPS59211098A (en) Voice recognition equipment
JPS63798B2 (en)
JPS5969798A (en) Extraction of pitch
JPS6147994A (en) Voice recognition system
JPS6250800A (en) Voice recognition equipment

Legal Events

Date Code Title Description
EXPY Cancellation because of completion of term