US4592086A - Continuous speech recognition system - Google Patents

Continuous speech recognition system

Info

Publication number: US4592086A
Application number: US06447829
Authority: US
Grant status: Grant
Legal status: Expired - Lifetime
Inventors: Masao Watari, Hiroaki Sakoe
Original/Current Assignee: NEC Corp

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/08: Speech classification or search
    • G10L15/12: Speech classification or search using dynamic programming techniques, e.g. dynamic time warping [DTW]

Abstract

A continuous speech recognition system determines the similarity between input patterns and reference patterns over time such that similarities between previously spoken speech patterns and reference patterns are determined while speech continues to be spoken. Degrees of dissimilarity at arbitrary reference pattern word times are determined asymptotically and are recorded. The minimum degree of dissimilarity is determined and the corresponding word is categorized. Recognition decisions are ultimately made in reverse chronological order.

Description

BACKGROUND OF THE INVENTION

This invention relates to a system for automatically recognizing continuous speech composed of a plurality of words spoken continuously.

Various methods have hitherto been tried for voice recognition. A simple pattern matching method, which is both available and effective, will be described below. This method measures the degree of dissimilarity (hereinafter called "similarity measure") between a reference pattern (hereinafter called "reference word pattern") prepared for each word to be recognized and an inputted unknown voice pattern (hereinafter called "input pattern"), thereby recognizing the input pattern as the word of the reference word pattern for which the similarity measure is minimized.
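This basic idea can be sketched in a few lines of Python (an illustration only, not the patent's implementation; the city-block frame distance and the frame-by-frame comparison without time warping are simplifying assumptions):

```python
# Illustrative sketch: recognize an unknown input pattern as the word whose
# reference word pattern gives the smallest similarity measure (dissimilarity).
# Patterns are time series of feature vectors; all names here are hypothetical.

def frame_distance(a, b):
    """City-block distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def similarity_measure(input_pattern, reference_pattern):
    """Crude dissimilarity: sum of frame distances, with no time warping."""
    return sum(frame_distance(a, b)
               for a, b in zip(input_pattern, reference_pattern))

def recognize(input_pattern, reference_patterns):
    """Return the word whose reference pattern minimizes the measure."""
    return min(reference_patterns,
               key=lambda w: similarity_measure(input_pattern,
                                                reference_patterns[w]))
```

In practice the patterns differ in length and speaking rate, which is why the DP matching described below replaces this rigid frame-by-frame alignment.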

A continuous speech recognition system operating according to the above-mentioned pattern matching method has been proposed in U.S. Pat. No. 4,059,725. This system operates by matching a reference pattern of a continuous voice (hereinafter called "reference continuous voice pattern"), obtained by connecting several reference word patterns in every order, with the whole input pattern. The recognition is performed by specifying the number and order of the reference word patterns so that the whole similarity measure is minimized. The above-mentioned minimization is divided practically into two stages, one minimizing at the word unit level and one minimizing at the whole pattern level, and each minimization is carried out by dynamic programming (the matching using dynamic programming being called "DP matching" hereinafter).

In minimization at a word unit level, the system divides the input pattern at every conceivable word unit and then performs DP matching with the reference word pattern for all of them. Assuming here that the length of the input pattern is M and that the number of reference word patterns is V, DP matching will be required M.V times.

A technique for reducing the above number of DP matchings to Lmax.V, with one word being one digit and the maximum available digit number being Lmax, has been proposed by Cory S. Myers and Lawrence R. Rabiner. Reference is made to the paper "A Level Building Dynamic Time Warping Algorithm for Connected Word Recognition", IEEE TRANSACTIONS ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL. ASSP-29, No. 2, APRIL 1981, pp. 284-297. According to this technique, the similarity measure between the input pattern, given as a time series of feature vectors, and the reference continuous voice pattern, given for every combination of pluralities of words each comprising a time series of feature vectors, is obtained as follows. A time point m of the input pattern and a time point n of the reference continuous voice pattern are made to correspond with each other by a well-known optimum monotonically increasing nonlinear function (hereinafter called "time normalized function") n=n(m), and the minimum of the accumulated value of the distances d(m, n) between feature vectors taken along the time normalized function is the similarity measure. The minimum value of the whole similarity measure obtainable along all matching routes passing a given point is given generally by the sum of the minimum value of a partial similarity measure from the start to the given point and that of a partial similarity measure from the given point to the end. Therefore, regarding an end point on each digit of the reference continuous voice pattern as the foregoing given point, a minimum partial similarity measure may be obtained on each digit, and the minimum whole similarity measure may be obtained by summing the minimum partial similarity measures over all digits.
Namely, each reference word pattern on the first digit of the reference continuous voice pattern is first matched with the input pattern to obtain a minimum value of the similarity measure; the result then works as the initial value for the matching of the second digit, in which each reference word pattern on the second digit is matched with the input pattern. After matching as far as the Lmax-th digit, the minimum value of the similarity measure on each digit at the end point M of the input pattern is obtained, thus giving an optimum digit number L. A recognition category for each digit is then obtained successively by backwardly following the matching path from the point of the minimum similarity measure on the L-th digit.
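The level-building procedure described above can be sketched roughly as follows (a hedged illustration only: it uses unconstrained symmetric DTW moves rather than the slope-constrained paths of the actual technique, and omits the matching window; all names are illustrative):

```python
import math

def level_building(inp, refs, Lmax):
    """Rough sketch of level-building connected-word matching.
    level[l][m] holds the best cumulative distance for matching l words
    against input frames 1..m; each level seeds a word DTW from the level
    below.  Simplifications: unconstrained symmetric DTW moves, no
    matching window, city-block frame distance."""
    M, INF = len(inp), math.inf
    level = [[INF] * (M + 1) for _ in range(Lmax + 1)]
    level[0][0] = 0.0
    back = [[None] * (M + 1) for _ in range(Lmax + 1)]  # (start frame, word)
    for l in range(1, Lmax + 1):
        for word, ref in refs.items():
            N = len(ref)
            # D[n][m]: DTW cost up to ref frame n / input frame m
            D = [[INF] * (M + 1) for _ in range(N + 1)]
            start = [[None] * (M + 1) for _ in range(N + 1)]
            for m in range(M + 1):          # seed row from the previous level
                D[0][m], start[0][m] = level[l - 1][m], m
            for n in range(1, N + 1):
                for m in range(1, M + 1):
                    prev = min([(n - 1, m - 1), (n, m - 1), (n - 1, m)],
                               key=lambda p: D[p[0]][p[1]])
                    if D[prev[0]][prev[1]] < INF:
                        d = sum(abs(x - y)
                                for x, y in zip(inp[m - 1], ref[n - 1]))
                        D[n][m] = D[prev[0]][prev[1]] + d
                        start[n][m] = start[prev[0]][prev[1]]
            for m in range(1, M + 1):       # best word ending at frame m
                if D[N][m] < level[l][m]:
                    level[l][m] = D[N][m]
                    back[l][m] = (start[N][m], word)
    return level, back
```

The key structural point for what follows is the loop nesting: the outermost loop runs over digits (levels), so the whole input must be available before the later levels can be processed.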

Minimization is thus effected on each digit of the reference continuous voice pattern according to the technique of Myers et al.; the number of DP matching processes for each digit is therefore equal to the reference word number V, reducing the number of whole DP matching processes to Lmax.V.

In a speech recognition system, the recognition response time is the time from the end of speech being detected to the recognized result being outputted. According to the technique of Myers et al., the matching on the first digit is commenced after the input pattern necessary for matching on the first digit is obtained, and the matching follows successively up to the Lmax-th digit, thus obtaining the recognized result. In other words, the calculation for DP matching proceeds in the input pattern axis direction in the above technique, and therefore a major part of the calculation cannot be commenced until the input voice comes to an end. For example, assuming the upper boundary of the matching range (the well-known matching window) is specified by a straight line of inclination 2, the lower boundary by a straight line of inclination 1/2, and the maximum reference pattern length is double the average reference pattern length, the calculation for the matching process has proceeded only to 1/4 of the digits at the time point when the input voice comes to an end, and the calculation of the remaining 3/4 of the digits is left for processing after the voice is uttered. The processing time for these 3/4 of the digits is a large time lag in the recognition response, which is problematic in real time. To settle the problem, a complicated and expensive high-speed processor capable of parallel processing and pipeline processing would be required.

SUMMARY OF THE INVENTION

It is, therefore, an object of this invention to provide a continuous speech recognition system which is simple in construction and capable of shortening recognition response time.

Another object of this invention is to provide a continuous speech recognition system capable of reducing the number of calculations and amount of memory capacity required in achieving the above features.

A further object of this invention is to provide a continuous speech recognition system capable of removing the limitation on number of words (digits) that an input voice comprises in achieving the above features.

According to the present invention, there is provided a continuous speech recognition system for recognizing input speech composed of a plurality of continuously spoken words comprising: a speech analyzing means for analyzing an input signal at every given frame time point m and outputting an input pattern expressed as a time series of a feature vector consisting of a predetermined number of feature parameters, an input pattern memory to store the input pattern, a reference pattern memory to store a reference pattern consisting of a feature vector with the same format as the input pattern for each of a plurality (V) of predetermined words to be recognized, a distance calculating means to calculate the distance between the feature vector at a time point m (varied from start point to end point M) of said input pattern and the feature vector at a time point n of the reference pattern of the v-th word at every time point m under a predetermined distance formula by changing the time point n of the reference pattern of the v-th word in an arbitrary order from the start point to end point Nv, an asymptotic calculating means to calculate a similarity measure D(v, n) given by the cumulative sum of the distance at the time points n and path information F(v, n) indicating a time point of the input pattern at the start point of the v-th reference pattern on a path through which the similarity measure D(v, n) has been obtained by a predetermined asymptotic expression according to a dynamic programming process at every time point m based on the similarity measure and path information obtained at a time point (m-1), a digit similarity measure and digit path information calculating means to select a minimum similarity measure from among the similarity measures at the end time points of reference patterns of all the words obtained through the asymptotic calculating means, and to provide said minimum similarity measure as a digit similarity measure DB(m), a category to which the 
word corresponding to the minimum similarity measure belongs as a digit recognition category W(m)=v, and path information corresponding to the minimum similarity measure as digit path information FB(m) at the time point m, an initializing means to give the digit similarity measure DB(m-1) at a time point (m-1) as the initial value of the similarity measure and to give a time point as an initial value of the path information in the asymptotic calculating means, a decision making means to obtain a recognized result at a final digit from said digit recognition category W(M) at the end point M of said input pattern, to obtain an end point of said input pattern on the digit previous to the final digit by one from the digit path information at the end point M, obtain a recognized result on the digit previous to the final digit by one from the digit recognition category at the end point, and to obtain a recognized result on each digit sequentially toward the start point of the input pattern.

According to another aspect of this invention, the probability of erroneous recognition can be decreased by making one word correspond to one digit and thus limiting the number of input digits. In this case, a value on each digit will be obtained for similarity measure, path information, digit similarity measure, digit path information, digit recognition category, etc.

According to a further aspect of this invention, moreover, memory capacity can be curtailed by performing the asymptotic calculation in the direction to decrease time points of the reference pattern.

According to a further aspect of this invention, the probability of erroneous recognition can be further decreased by selecting a suitable asymptotic calculation.

Other objects and features of the invention will be clarified from the following description with reference to the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a drawing for explaining the fundamental principle of this invention;

FIG. 2 is a drawing for explaining an allowable matching path for the similarity measure calculation;

FIG. 3 is a drawing representing how to store a similarity measure D(l, v, n) and path information F(l, v, n) in a memory in an embodiment of this invention;

FIG. 4 is a drawing representing how to store a digit similarity measure DB(l, m), digit path information FB(l, m) and a digit recognition category W(l, m) in a memory in an embodiment of this invention;

FIG. 5 is a block diagram of a continuous speech recognition system given in one preferred embodiment of this invention;

FIG. 6 is a time chart of a signal at each part in FIG. 5;

FIG. 7 is a block diagram representing the composition of a distance calculating unit 15 in FIG. 5;

FIG. 8 is a block diagram representing the composition of an asymptotic calculating unit 17 in FIG. 5;

FIG. 9 is a drawing representing the composition of a digit similarity measure calculating unit 20 in FIG. 5;

FIG. 10 is a drawing representing the composition of a decision unit 24 in FIG. 5;

FIGS. 11A and 11B are flowcharts showing processing procedures of the above one embodiment of the invention;

FIG. 12 is a drawing for explaining the principle of another embodiment of this invention;

FIG. 13 is a block diagram of a continuous speech recognition system given in a further embodiment of this invention;

FIG. 14 is a time chart of a signal at each part in FIG. 13;

FIG. 15 is a drawing representing how to store a similarity measure D(v, n) and path information F(v, n) in FIG. 13 in a memory;

FIG. 16 is a drawing representing how to store a digit similarity measure DB(m), digit path information FB(m) and a digit recognition category W(m) in FIG. 13 in a memory;

FIG. 17 is a block diagram representing the composition of an asymptotic calculating unit 17' in FIG. 13;

FIG. 18 is a block diagram representing the composition of a digit similarity measure calculating unit 20' in FIG. 13;

FIG. 19 is a block diagram representing the composition of a decision unit 24' in FIG. 13;

FIGS. 20A and 20B are flowcharts showing processing procedures of the further embodiment of this invention.

DESCRIPTION OF THE INVENTION

A drawing illustrative of processing procedures according to the present invention is given in FIG. 1. As in the paper by Myers et al., U(m) and L(m) denote the upper bound and the lower bound of a matching window domain for DP matching. The axis of the abscissa indicates a time point m of an input pattern A, and the feature vector (consisting of R feature parameters) am at the time point m, from the start 1 of speech to the end M, is expressed by

am=am1, am2, . . . , amr, . . . , amR                        (1)

where amr denotes the r-th feature parameter constituting am. The axis of ordinate on the left side indicates a time point n of a reference continuous voice pattern B, and that on the right side indicates a digit number. Each digit corresponds to one reference word of the words to be recognized, and the time length of each digit varies according to the length of the reference word. Assuming the number of reference words is V and the v-th reference word is denoted as Bv, the time points of the reference word Bv run from the start point n=1 of the digit to an end point determined by the length of the reference word Bv. Therefore, the feature vector (consisting of R feature parameters) at the time point n of the reference word Bv of a digit is expressed, as in the case of am, by the following:

bn v=bn1 v, bn2 v, . . . , bnr v, . . . , bnR v                                        (2)

Then, a distance d(m, n) between the feature vector am at the time point m of an input pattern and the feature vector bn v at the time point n of the reference pattern of the v-th word on a digit is defined by

d(m, n)=Σ|amr -bnr v |, r=1˜R                              (3)

The time points m and n are further made to correspond to each other by a time normalized function, and the cumulative sum of the distances d(m, n) between feature vectors taken along the time normalized function defines the similarity measure D(m, n). The similarity measure is calculated asymptotically as

D(m, n)=d(m, n)+D(m-1, n*)                                 (4)

where

n*=argmin D(m-1, n'), n-2≦n'≦n                             (5)

Here argmin y(x) under the condition xεX denotes the x that minimizes y, so that n* is the n' minimizing D(m-1, n') under n-2≦n'≦n. In other words, the expression (4) indicates that the path minimizing the similarity measure is selected from among the three paths from the points (m-1, n), (m-1, n-1) and (m-1, n-2) to the point (m, n), as shown in FIG. 2. The path through which the minimum value D(m-1, n*) used for obtaining the similarity measure D(m, n) is selected is called a matching path, and the path information F(m, n) indicating this path is defined by

F(m, n)=F(m-1, n*)                                         (6)
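Expressions (4) through (6) amount to the following single-frame update, sketched here in Python with illustrative variable names (0-based n rather than the patent's 1-based indexing):

```python
import math

def dp_column_step(d_col, D_prev, F_prev):
    """One update of expressions (4)-(6) for input frame m.
    d_col[n]  -- frame distance d(m, n)
    D_prev[n] -- similarity measure D(m-1, n)
    F_prev[n] -- path information F(m-1, n)
    Returns (D_new, F_new), i.e. D(m, n) and F(m, n) for all n."""
    N, INF = len(d_col), math.inf
    D_new, F_new = [INF] * N, [None] * N
    for n in range(N):
        # expression (5): the n' in {n, n-1, n-2} minimizing D(m-1, n')
        best = min((k for k in (n, n - 1, n - 2) if k >= 0),
                   key=lambda k: D_prev[k])
        if D_prev[best] < INF:
            D_new[n] = d_col[n] + D_prev[best]   # expression (4)
            F_new[n] = F_prev[best]              # expression (6)
    return D_new, F_new
```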

According to the technique of Myers et al., the similarity measure is first calculated between each reference word pattern and the input pattern, and then a similar calculation is made for the next digit. Namely, the loop calculation order for DP matching is: time point m, time point n, word number v of the reference word pattern, and digit number l. In consideration of the foregoing and FIG. 1, it is apparent that a time lag in the recognition response may arise, as mentioned hereinbefore.

Noticing that the similarity measure at the time point m of the input pattern can be calculated as soon as the similarity measure at the time point m-1 has been calculated, since a path for DP matching is monotonically increasing, the present invention provides a continuous speech recognition system capable of reducing the recognition response time lag and of processing synchronously with the voice input, by carrying out the calculation of the similarity measure in a string vertically along the time axis of the reference pattern at each time point of the input pattern, instead of in the input pattern direction as in the technique proposed by Myers et al.

In this invention, the similarity measure is calculated in a vertical string including each digit and parallel with the n axis, as shown in the oblique-lined portion of FIG. 1. The following parameters are defined prior to giving a description thereof.

D(l, v, n) is the accumulated distance as of the time point n of the reference pattern of the v-th word on the l-th digit, and is called the similarity measure; F(l, v, n) indicates the time point of the input pattern corresponding to the start point (time point 1) of the reference pattern on the l-th digit along the path through which the similarity measure D(l, v, n) is obtained, and is called path information; DB(l, m) is the minimum value of the similarity measures D(l, v, Nv) obtained through calculations as far as the end point Nv of each of the reference patterns of all the words at the time point m of the input pattern, and is called the digit similarity measure; FB(l, m) indicates the path information corresponding to the similarity measure D(l, v, Nv) giving the digit similarity measure DB(l, m), and is called digit path information; W(l, m) indicates the category to which the word of the reference pattern used when the digit similarity measure DB(l, m) is obtained belongs, and is called the digit recognition category; R(l) is the recognized result of the l-th digit.

As initial values, the digit similarity measure DB(l-1, m-1) and the similarity measure D(m-1, n) on each digit, which are necessary for calculating the similarity measures in a vertical string including all digits, have been obtained through the calculation at the point m-1. It is noted here that the similarity measure D(m-1, n) and the path information F(m-1, n) at the point m-1 must be stored for each digit l and each word v. The similarity measure D(m-1, n) and the path information F(m-1, n) of the word v on the digit l are given by D(l, v, n) and F(l, v, n), respectively. FIG. 3 shows how D(l, v, n) and F(l, v, n) are stored in a memory, and FIG. 4 shows how DB(l, m), FB(l, m) and W(l, m) are stored.
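As a rough picture of the storage of FIGS. 3 and 4, the tables may be imagined as the following arrays (all dimensions here are hypothetical example values; expressions (7)-(9) below supply the initial contents):

```python
# Hypothetical example sizes: Lmax digits, V words, Nmax reference frames,
# M input frames (FIGS. 3 and 4 do not fix these values).
Lmax, V, Nmax, M = 7, 10, 30, 120
INF = float("inf")
# FIG. 3: similarity measure D(l, v, n) and path information F(l, v, n)
D = [[[INF] * (Nmax + 1) for _ in range(V + 1)] for _ in range(Lmax + 1)]
F = [[[None] * (Nmax + 1) for _ in range(V + 1)] for _ in range(Lmax + 1)]
# FIG. 4: digit similarity DB(l, m), digit path FB(l, m), category W(l, m)
DB = [[INF] * (M + 1) for _ in range(Lmax + 1)]   # expression (8)
FB = [[None] * (M + 1) for _ in range(Lmax + 1)]
W = [[None] * (M + 1) for _ in range(Lmax + 1)]
DB[0][0] = 0.0                                    # expression (9)
```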

Next, the fundamental principle of the continuous speech recognition system according to the present invention will be described. For similarity measure calculations, an asymptotic calculation for dynamic programming will be performed in order at each time point m along the time axis of an input pattern in the matching window domain between the upper bound U(m) and the lower bound L(m) as shown in FIG. 1. Initial conditions for the similarity measure calculation will be given by

D(l, v, n)=∞                                         (7)

l=1˜Lmax, v=1˜V, n=1˜Nv

DB(l, m)=∞                                           (8)

l=0˜Lmax, m=0˜M

DB(0, 0)=0                                                 (9)

The similarity measure calculation in a string vertical and parallel with the n axis at the time point m of the input pattern is performed as follows. First, the vector distance between the feature vector am at the time point m of the input pattern and the feature vector bn v of the v-th reference word pattern is calculated according to the expression (3). Then follows the calculation of the similarity measure in a vertical string on each digit. With the values initialized at

D(l, v, 0)=DB(l-1, m-1)                                    (10)

F(l, v, 0)=m-1                                             (11)

the similarity measure calculation is carried out through calculating expressions (12) and (13) with n decreasing between U(m) and L(m).

D(l, v, n)=d(n)+D(l, v, n*)                                (12)

F(l, v, n)=F(l, v, n*)                                     (13)

where

n*=argmin D(l, v, n'), n-2≦n'≦n                            (14)

As shown in FIG. 2 and expressions (12) and (14), the calculation at the point (m, n) is obtainable from the similarity measures of the three points (m-1, n), (m-1, n-1), (m-1, n-2). The calculation at the point (m, n-1) can then be obtained from the similarity measures of the three points (m-1, n-1), (m-1, n-2), (m-1, n-3); the similarity measure at the point (m-1, n) not being used therefor, no influence is exerted on the calculation at the point (m, n-1) by storing the result calculated at the point (m, n) in the place of the point (m-1, n). Therefore, calculating the similarity measure in the direction of decreasing n makes it possible to use a storage area in common for the similarity measures at point m-1 and at point m, thus saving memory. After carrying out the above calculations in a vertical string, the similarity measure D(l, v, Nv) at the end Nv of the reference word pattern of each word v on each digit is compared with the digit similarity measure DB(l, m), which is the minimum word similarity measure on the digit calculated so far; when D(l, v, Nv) is less than DB(l, m), the similarity measure D(l, v, Nv) is made the digit similarity measure DB(l, m), the category v to which the reference word pattern belongs is made the digit recognition category W(l, m), and the matching path information F(l, v, Nv) through which the similarity measure D(l, v, Nv) is obtained is made the digit path information FB(l, m).

Namely, where

DB(l, m)>D(l, v, Nv),

then

DB(l, m)=D(l, v, Nv)                                       (15)

W(l, m)=v                                                  (16)

FB(l, m)=F(l, v, Nv)                                       (17)

The calculation of the similarity measure in a vertical string is thus carried out for each of the V reference word patterns. The time point m of the input pattern is then increased by one and the calculation is carried out similarly, proceeding as far as the end point M of the input pattern.
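Putting the initialization (10)-(11), the decreasing-n asymptotic calculation (12)-(14) and the digit update (15)-(17) together, the frame-synchronous matching loop may be sketched as below. This is an illustrative simplification: it omits the matching window and the digit index l (corresponding to the unlimited-digit variant mentioned later), and all names are hypothetical:

```python
import math

def recognize_connected(inp, refs):
    """Sketch of the frame-synchronous matching and decision.
    inp  -- input pattern, a list of feature vectors (frames)
    refs -- dict mapping each word category to its reference pattern."""
    M, INF = len(inp), math.inf
    # per-word DP state; index 0 is the start seed, 1..Nv the ref frames
    D = {v: [INF] * (len(ref) + 1) for v, ref in refs.items()}
    F = {v: [None] * (len(ref) + 1) for v, ref in refs.items()}
    DB = [INF] * (M + 1)   # digit similarity measure DB(m)
    FB = [None] * (M + 1)  # digit path information FB(m)
    W = [None] * (M + 1)   # digit recognition category W(m)
    DB[0] = 0.0
    for m in range(1, M + 1):
        for v, ref in refs.items():
            Nv, Dv, Fv = len(ref), D[v], F[v]
            # initialization (10), (11): a word may start just after frame m-1
            Dv[0], Fv[0] = DB[m - 1], m - 1
            # decreasing n lets the frame-(m-1) storage be reused in place
            for n in range(Nv, 0, -1):
                best = min((k for k in (n, n - 1, n - 2) if k >= 0),
                           key=lambda k: Dv[k])
                if Dv[best] < INF:                 # expressions (12)-(14)
                    d = sum(abs(x - y)
                            for x, y in zip(inp[m - 1], ref[n - 1]))
                    Dv[n], Fv[n] = Dv[best] + d, Fv[best]
                else:
                    Dv[n], Fv[n] = INF, None
            if Dv[Nv] < DB[m]:                     # expressions (15)-(17)
                DB[m], W[m], FB[m] = Dv[Nv], v, Fv[Nv]
    words, m = [], M                               # decision by backtracking
    while m > 0 and W[m] is not None:
        words.append(W[m])
        m = FB[m]
    return list(reversed(words)), DB[M]
```

Note that the inner work at frame m uses only quantities from frame m-1, which is what allows the loop to run synchronously with the speech input.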

Finally, a decision on the input pattern is made according to the digit path information FB(l, m) and the digit recognition category W(l, m). The method of this decision comprises, as described in the paper by Myers et al., obtaining the minimum value of the digit similarity measures DB(l, M) at the end point M of the input pattern over the permitted digits, i.e., from the Lmin-th digit to the Lmax-th digit; the digit L at which the minimum value is obtained is the digit number of the input pattern. Further, the recognized result R(L) on the L-th digit is obtained from W(L, M), and the end point of the (L-1)-th digit is obtained from the digit path information FB(L, M). The recognized result R(l) on each digit is then obtained by repeating the above operation in turn.

Namely, the digit L of the input pattern is obtained from the following:

L=argmin (DB(l, M))                                        (18)

Lmin≦l≦Lmax

Then, the recognized result R(L) on the L-th digit is obtained from

R(L)=W(L, M)                                               (19)

and the end point m of the (L-1)th digit is obtained from

m=FB(L, M)                                                 (20)

Generally, given the end point m of the l-th digit, the recognized result R(l) on the l-th digit and the end point m of the (l-1)-th digit are obtained from

R(l)=W(l, m)                                               (21)

m=FB(l, m)                                                 (22)

and words of all the digits are ultimately recognized.
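The decision procedure of expressions (18)-(22) may be sketched as follows, assuming the tables DB, W and FB have been filled in by the matching stage (names and array layout are illustrative):

```python
def backtrack(DB, W, FB, M, Lmin, Lmax):
    """Decision step of expressions (18)-(22).
    DB[l][m], W[l][m], FB[l][m] are the tables filled by the matching stage
    (index l = digit, m = input time point); returns the recognized words."""
    # expression (18): the digit count L minimizing DB(l, M)
    L = min(range(Lmin, Lmax + 1), key=lambda l: DB[l][M])
    result, m = [None] * (L + 1), M
    for l in range(L, 0, -1):
        result[l] = W[l][m]   # expressions (19), (21)
        m = FB[l][m]          # expressions (20), (22)
    return result[1:]         # words in spoken order
```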

According to the present invention, the calculation of the similarity measure is carried out in the time axis direction of the input pattern, as described above. Upon detection of the start of speech, the calculation is commenced right away, and since the calculation proceeds synchronously with the speech input, the decision processing given in the expressions (18)˜(22) can start concurrently with the end of the speech. The recognition response time can therefore be remarkably shortened as compared with the above-mentioned technique of Myers et al.

Further, while the number of distance calculations required by the technique of Myers et al. is Nv·M·V·Lmax, the invention decreases this number to Nv·V·M, as is apparent from the description given hereinabove; the amount of distance calculation is thus reduced to 1/Lmax of the former.
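As a concrete check of this reduction, with hypothetical example sizes the two counts compare as follows:

```python
# Hypothetical example sizes: reference length Nv = 30 frames, input
# length M = 120 frames, vocabulary V = 10 words, at most Lmax = 7 digits.
Nv, M, V, Lmax = 30, 120, 10, 7
myers = Nv * M * V * Lmax   # level building: one pass per digit level
proposed = Nv * V * M       # frame-synchronous: a single pass
print(myers, proposed, myers // proposed)  # prints: 252000 36000 7
```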

Now, an embodiment of the system according to the present invention will be described with reference to the accompanying drawings. FIG. 5 is a block diagram representing the composition of one embodiment of this invention; FIG. 6 is a time chart of control command signals for each part given in FIG. 5. A control unit 10 functions to control the other units by control command signals Cl1, DST, m1, v1, n1, r, Cl2, l1, n3, n2, n21, n22, m3, l2, etc., as shown in FIG. 6; a detailed description thereof will be given in connection with the operation of the other units as the occasion arises.

An input unit 11 analyzes the input speech given by a signal SPEECH IN and outputs, at a constant interval (frame), a feature vector am consisting of R feature parameters as shown in the expression (1). Speech analysis is performed by, for example, frequency analysis with a filter bank constituted of a multi-channel (R-channel) filter. The input unit 11 also monitors the level of the input speech, detects the start and the end of the speech, and sends a signal ST indicating the start and a signal EN indicating the end, as a signal SP, to the control unit 10 and an input pattern memory 12.

After receipt of the SP signal, the input pattern memory 12 stores the feature vector am given by the input unit 11, in accordance with the signal m1 (ranging from 1 to the end time point M) which indicates a time point of the input pattern and is supplied from the control unit 10.

The V reference words predetermined as the words to be recognized are analyzed to obtain, at each time point (frame), a feature vector consisting of R feature parameters as shown in the expression (2). The 1st to V-th reference word patterns B1, B2, . . . , BV thus obtained (each pattern being given as a time series of feature vectors) are stored in a reference pattern memory 13. The length Nv of the reference pattern Bv of the v-th word is stored in a reference pattern length memory 14.

A signal v1 from the control unit 10 specifies the v-th reference word and indicates the category to which the reference word belongs. The length Nv of the reference word pattern Bv of the specified reference word is read out of the reference pattern length memory 14 in response to the signal v1. After receiving the Nv signal, the control unit 10 generates a signal (1˜Nv) corresponding to the time point n of the reference word pattern.

From the input pattern memory 12, the r-th feature parameter amr of the feature vector am corresponding to the time point of the signal m1 is supplied to a distance calculating unit 15 in response to the signals m1 and r from the control unit 10. On the other hand, the r-th feature parameter bnr v of the feature vector bn v (n=1˜Nv) at the time point n1 of the v-th reference word pattern is read out of the reference pattern memory 13, which has received the signals v1, n1 and r, and is thereafter sent to the distance calculating unit 15.

Upon receipt of amr and bnr v, the distance calculating unit 15 calculates the distance d(m, n) defined by the expression (3). Since the calculation according to the present invention is carried out in a vertical string as illustrated in FIG. 1, m is handled as fixed and d(m, n) can be expressed as d(n); d(n) is thus obtained at the time points n=1, 2, . . . , Nv on each digit and stored in a distance memory 16. An example of the composition of the distance calculating unit 15 is shown in FIG. 7. After reception of the signal SP indicating the start time point of the input speech, the contents stored in an accumulator 153 are cleared according to a clear signal Cl2 generated from the control unit 10 for each n at m. An absolute value circuit 151 provides the absolute value |amr -bnr v | of the difference between the feature parameters amr and bnr v sent from the input pattern memory 12 and the reference pattern memory 13, and the result is supplied to one input terminal of an adder 152. The adder output is stored in the accumulator 153. The output terminal of the accumulator 153 is connected to the other input terminal of the adder 152, and d(n) of the expression (3) is finally obtained as the output of the accumulator 153 by changing the signal r from 1 to R. The distance d(n) thus obtained is stored in the distance memory 16 at the address specified by n1.

Initialization of the similarity measure and the digit similarity measure, which is necessary for the asymptotic calculation of similarity measures, is carried out by the signal Cl1 from the control unit 10 before the speech is inputted, and the values given by the expressions (7), (8) and (9) are set in a similarity measure memory 18 and a digit similarity measure memory 21.

An asymptotic calculating unit 17 computes the similarity measure D(l, v, n) and the path information F(l, v, n) through the computation of the expressions (12), (13) and (14). To save memory capacity for the similarity measure and the path information, as described hereinbefore, the time point of a reference pattern is decreased by one from the upper bound U(m) of the matching window to the lower bound L(m). A signal n2 is used for this control of the time point. The distance stored at an address n2 is read out of the distance memory 16 in response to the signal n2 from the control unit 10. The asymptotic calculating unit 17 is comprised of three similarity measure registers 173, 174, 175, a comparator 171, an adder 172, and three path registers 176, 177, 178, as shown in FIG. 8. The similarity measures D(l, v, n), D(l, v, n-1), D(l, v, n-2) and the path information F(l, v, n), F(l, v, n-1), F(l, v, n-2), specified by the signals n2, n21 and n22 indicating the time point of a reference pattern and the two time points previous to the time point of the signal n2, are stored in the similarity measure registers 173˜175 and the path registers 176˜178, respectively. The comparator 171 detects a minimum value from the three similarity measure registers 173, 174, 175 and issues a gate signal n for selecting the path register corresponding to the similarity measure register from which the minimum value has been obtained. The content of the path register selected by the gate signal n is stored in F(l, v, n) of a path memory 19. Then, the minimum value of the similarity measure outputted from the comparator 171 is added to the distance d(n) read out of the distance memory 16 through the adder 172 and stored in the similarity measure memory 18 as D(l, v, n).

The asymptotic calculation is performed with the time point decreasing from U(m) to L(m) in response to the signal n2, and the word similarity measure D(l, v, Nv) is computed for each v on each l.
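A software model of this vertical-string asymptotic calculation for one digit of one reference word, with the matching window U(m)˜L(m) simplified to the full range Nv˜1, might look as follows (all names are illustrative; the lists D and F are assumed to hold the values from input time m-1 on entry):

```python
INF = float('inf')

def vertical_update(D, F, dist, init_D, init_F):
    """One vertical-string DP update, corresponding to expressions
    (12)-(14).  D[n] and F[n] (n = 0..Nv) hold the similarity
    measures and path information from input time m-1.  Processing n
    in DECREASING order lets the same arrays be overwritten in place,
    which is why the hardware steps the time point downward."""
    Nv = len(D) - 1
    D[0], F[0] = init_D, init_F   # digit linkage, cf. expressions (10), (11)
    for n in range(Nv, 0, -1):
        # comparator 171: minimum of D[n], D[n-1], D[n-2]
        best_d, best_k = min((D[k], k) for k in range(max(0, n - 2), n + 1))
        F[n] = F[best_k]          # path register selected by the gate signal
        D[n] = dist[n] + best_d   # adder 172
    return D, F
```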

A digit similarity measure calculating unit 20 performs the processes of the expressions (15), (16), (17) and obtains, one after another, the minimum values among the word similarity measures D(l, v, Nv) obtained for each of the V words on each digit. As shown in FIG. 9, the digit similarity measure calculating unit 20 is comprised of a comparator 201, a register 202 to hold the word similarity measure D(l, v, Nv), a register 203 to hold the signal v1 indicating the category v to which a reference word pattern belongs, and a register 204 to hold the path information F(l, v, Nv). The signal l1 specifies the digit of a reference continuous speech pattern, ranging up to Lmax for each of the signals v1. The word similarity measure D(l, v, Nv) and the word path information F(l, v, Nv) are read out of the similarity measure memory 18 and the path memory 19 according to the signal l1 generated from the control unit 10 and stored in the registers 202 and 204 respectively, and the category v to which the reference word pattern belongs is stored in the register 203. The comparator 201 compares the above word similarity measure D(l, v, Nv) with the digit similarity measure DB(l, m) read out of the digit similarity measure memory 21, and when D(l, v, Nv) is less than DB(l, m), generates a gate signal v. The word similarity measure D(l, v, Nv), the category v, and the word path information F(l, v, Nv) held in the registers 202, 203, 204, respectively, are stored in the digit similarity measure memory 21 as DB(l, m), in the digit recognition category memory 22 as W(l, m), and in the digit path memory 23 as FB(l, m), in response to the gate signal v.

Further, signals n3, m3 and l2, indicating respectively the time point 0 of a reference pattern, the time point m-1 one previous to the time point m of the input pattern specified by a signal m1, and the digit one previous to that specified by the signal l1, are generated from the control unit 10. The initialization for the similarity measure calculation in a vertical string, as shown in the expressions (10) and (11), is carried out according to those signals. Namely, a digit similarity measure DB(l-1, m-1) specified by the signals l2 and m3 is read out of the digit similarity measure memory 21 and stored in the similarity measure memory 18 at the address specified by the signals l1, v1, n3 as D(l, v, o). Then, a signal md indicating the address specified by the signal m3 is supplied to the path information memory 19 from the control unit 10, and the value (m-1) specified by the signal md is stored in the path memory 19 as F(l, v, o) at the address specified by the signals l1, v1, n3.

A decision unit 24 carries out the decision processing shown in the expressions (18)˜(22) and outputs a recognized result R(l) on each digit of the input pattern based on the digit path information FB(l, m) and the digit recognition category W(l, m). In detail, as shown in FIG. 10, the decision unit 24 is comprised of a comparator 241, a register 242 to hold a minimum digit similarity measure, a register 243 to hold a digit number, a register 244 to hold the digit path information FB(l, m), a register 245 to hold a recognized result, and a decision control unit 246. When the end of the speech is detected by the input unit 11, the control unit 10, in response to the signal SP, supplies a signal DST for starting the above decision processing to the decision unit 24. After receipt of the signal DST, the decision control unit 246 issues a signal l3 indicating the digit to the digit similarity measure memory 21. The digit similarity measure DB(l, M) on each digit, from the first to the Lmax-th digit, at the end point M of the input pattern is read out sequentially from the digit similarity measure memory 21 according to the signal l3 and compared by the comparator 241 with the value stored in the register 242. The lesser value from the comparator 241 is stored in the register 242, and the digit number l then obtained is stored in the register 243. After the digit similarity measures for all Lmax digits are read according to the signal l3, the content of the register 243 represents the digit number L of the input pattern. From the digit path memory 23 and the digit recognition category memory 22, FB(L, M) and W(L, M) are read and stored in the register 244 and the register 245 in response to address signals l4 and m2 corresponding to l=L, m=M from the decision control unit 246. The content of the register 245 is outputted as a recognized result.
Further, the decision control unit 246 issues l=l-1 and m=(value stored in the register 244) to the digit path memory 23 and the digit recognition category memory 22 as address signals l4 and m2, and FB(l, m) and W(l, m) of the (L-1)-th digit are read and stored in the register 244 and the register 245. Recognized results on the L digits are outputted from the register 245 by repeating the above processing sequentially from L to 1.
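The decision processing of expressions (18)˜(22) amounts to selecting the digit count L that minimizes DB(l, M) and then backtracking through FB. A hedged software sketch (the array layout and names are assumed, not taken from the disclosure):

```python
def decide(DB, W, FB, M, Lmax):
    """Decision processing sketched after expressions (18)-(22).
    DB[l][m] : digit similarity measure
    W[l][m]  : digit recognition category
    FB[l][m] : digit path information (end point of digit l-1)"""
    # comparator 241 / registers 242, 243: digit count L of the input
    L = min(range(1, Lmax + 1), key=lambda l: DB[l][M])
    result = [None] * (L + 1)
    l, m = L, M
    while l >= 1:
        result[l] = W[l][m]   # register 245: recognized word on digit l
        m = FB[l][m]          # register 244: end point of the previous digit
        l -= 1
    return result[1:]         # words in first-to-last digit order
```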

A flowchart for procedures in the processing of the continuous speech recognition system according to the present invention is as shown in FIG. 11A and FIG. 11B.

Next, a continuous speech recognition system given in another embodiment of the present invention, capable of reducing both the memory capacity requirement and the quantity of calculations, will be described.

In the embodiment described above, since the addresses l, v, and n are necessary for the similarity measure and the path information, the required memory capacity becomes large in proportion to the digit number l. Further, the number of asymptotic calculations given in the expressions (12), (13), (14) increases in proportion to the digit number l. However, in most cases the digit number cannot necessarily be limited for recognition of a continuously spoken numeric string.

This embodiment proposes a continuous speech recognition system for which a limitation on the digit number included in an input speech can be removed and further both memory capacity required and the number of calculations can be decreased to 1/Lmax as compared with the above-mentioned embodiment.

Described first is the feasibility of removing the digit limitation according to this invention. Now, at a point (m1, n1) on the matching path (m, n(m)) along which the whole minimum similarity measure is obtained, the partial similarity measure obtained from the start to the point (m1, n1) along the matching path represents the minimum value of the partial similarity measure obtained along all the matching paths passing through the point (m1, n1). Namely, the minimum value of the whole similarity measure obtained along all the matching paths passing through the point (m1, n1) is given as the sum of the minimum value of the partial similarity measure from the start to the point (m1, n1) and that of the partial similarity measure from the point (m1, n1) to the end. This is given by the following expression: ##EQU2## where (m1, n1) is a point on n(m).

Thus the minimization from the point (m1, n1) to the end can be performed independently of the minimization from the start to the point (m1, n1).
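Since expression (23) survives here only as a placeholder, the decomposition it states — the principle of optimality underlying dynamic programming — can be written out in illustrative notation as:

```latex
% For any point (m_1, n_1) on the globally optimal matching path n(m),
% the minimum over all paths through (m_1, n_1) splits into two
% independent minimizations (notation illustrative, not expression (23) verbatim):
\min_{n(m)\,\ni\,(m_1,n_1)} S(A, C)
  = \min_{\text{start} \to (m_1,n_1)} S_{\text{partial}}
  \;+\; \min_{(m_1,n_1) \to \text{end}} S_{\text{partial}}
```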

Taken up here is the minimization from a point Xl, corresponding to a time point n(=Nv) of the v-th reference pattern on the l-th digit at a time point m1 of the input pattern, to a point El+L, corresponding to a time point n(=Nv) of the reference pattern on the (l+L)-th digit at the end time point M of the input pattern, as shown in FIG. 12. The partial similarity measure SL l (A', CL l) is obtainable from a partial input pattern A'=(am1+1, am1+2, . . . , aM) and a partial pattern of the reference continuous speech pattern CL l =Bvl, Bvl+1, . . . , Bvl+L, as indicated by the following: ##EQU3## Now, if the digit number is free from limitation, the digit number L at which a minimum partial similarity measure is acquired through changing L is obtained as the digit number of the partial input pattern A'. It is therefore given by ##EQU4##

Then, the minimization from a point Xl+1 to an end El+1+L will be taken up. The partial similarity measure SL l+1 (A', CL l+1) is obtained from the partial input pattern (am1+1, am1+2, . . . , aM) and a reference continuous speech pattern CL l+1 =Bvl+1, Bvl+2, . . . , Bvl+1+L, where CL l+1 is the same as CL l in that it connects L reference word patterns in the very same order. Therefore,

SL l+1 (A', CL l+1)=SL l+1 (A', CL l)=SL l (A', CL l)          (28)

and thus the minimization from the point Xl+1 to the end El+1+L has the same result as the minimization from the point Xl to the end El+L.

If there is no limitation effective on the digit number, as described, the partial similarity measures from each point X1, X2, . . . , Xl, . . . to the end are all the same and given by the expressions (24), (25), (26), and a reference pattern series then may be obtained by CL l of the expression (27). Meanwhile, the whole similarity measure is given, as shown in the expression (23), by the sum of the partial similarity measure from the start to the point Xl and the partial similarity measure from the point Xl to the end: ##EQU5## Namely, a minimum value of the similarity measure between a partial input pattern A"=(a1, a2, . . . , am1-1) from the start to a time point m1-1 and a reference pattern on the first digit, the second digit or the l-th digit is obtained, and with the minimum value as an initial value, the similarity measure between the remaining partial input pattern A' and a reference pattern CL l can be obtained. In other words, the calculation of the similarity measure on the latter part is identical to a calculation in which the ensuing digit is regarded as the first digit to be newly calculated, with the minimum digit similarity measure of the former part as its initial value. By applying the rule that "since the similarity measure of the latter part takes the same path irrespective of starting from any point of X1, X2, . . . , Xl, . . . , the calculation can proceed with the minimum value of the former part as an initial value for the first digit to be newly calculated" to all the time points m, the calculation is performed only on the first digit at m=1; then the digit similarity measure at the time point m-1 may be employed as the initial value for the first digit at a time point m, and the calculation is again carried out only on the first digit. As a result, the similarity measure is obtainable through a calculation for only one digit at each time point m.

Next, a process representing the fundamental principle of this embodiment will be described. The procedure is basically the same as that of the system given in FIG. 5, but without the digit parameter l.

The similarity measure calculation is performed by using an asymptotic expression for dynamic programming in the order of the time axis m of an input pattern, after initialization.

The initialization is given by

D(v, n)=∞                                            (30)

v=1˜V, n=1˜Nv

DB(o)=0                                                    (31)

DB(m)=∞                                              (32)

m=1˜M

A similarity measure calculation in a string vertical and parallel to the axis n at the time point m of an input pattern is performed as follows. With initial values as

D(v, o)=DB(m-1)                                            (33)

F(v, o)=m-1                                                (34)

the following expressions (35), (36), (37) will be calculated in the direction of decreasing n.

D(v, n)=d(am, bn v)+D(v, n̂)                            (35)

F(v, n)=F(v, n̂)                                        (36)

where,

n̂=argmin [D(v, n')]                                    (37)

n-2≦n'≦n

After execution of the above asymptotic calculation in a vertical string, the similarity measure D(v, Nv) at the end Nv of the reference word pattern is compared with the digit similarity measure DB(m), which is the minimum word similarity measure calculated so far. When D(v, Nv) is less than DB(m), the similarity measure D(v, Nv) is regarded as a new digit similarity measure DB(m), the category v to which the reference word pattern belongs is regarded as the digit recognition category W(m), and the matching path information F(v, Nv) whereby the similarity measure D(v, Nv) was obtained is regarded as the digit path information FB(m). Namely, when DB(m)>D(v, Nv), the following processing will be carried out.

DB(m)=D(v, Nv)                                             (38)

W(m)=v                                                     (39)

FB(m)=F(v, Nv)                                             (40)

The similarity measure calculations in a vertical string carried out as above are executed for each of the V reference word patterns.

Next, a similar calculation in a vertical string is executed for each of the V reference word patterns at the time point m of the input pattern increased by one, thus obtaining the similarity measures as far as the end point M of the input pattern.

Finally, a decision on the input pattern will be made according to the digit path information FB(m) and the digit recognition category W(m). The method of decision comprises obtaining first a recognized result R(L) from W(M) at the end M of the input pattern and then obtaining the end point of the (L-1)-th digit from the digit path information FB(M). The recognized result W(mL-1) at the point FB(M), which is the end mL-1 of the (L-1)-th digit, represents R(L-1) on the (L-1)-th digit. In brief, the decision is obtained by the following processes.

R(l)=W(m)                                                  (41)

m=FB(m)                                                    (42)

The recognized result R(l) on each digit l is obtained by repeating the above processing.
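Gathering expressions (30)˜(42), the whole procedure of this embodiment can be sketched end-to-end in software. Everything below — the function names, the list layout, and the choice of the sum-of-absolute-differences distance of expression (3) — is an illustrative reconstruction, not the claimed hardware:

```python
INF = float('inf')

def recognize(A, refs):
    """One-digit-at-a-time DP of expressions (30)-(42).
    A    : input pattern, list of feature vectors a_1..a_M
    refs : dict {category v: list of feature vectors b_1..b_Nv}
    returns recognized word categories, first digit to last."""
    M = len(A)
    d = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))  # expression (3)

    # initialization, expressions (30)-(32)
    D = {v: [INF] * (len(B) + 1) for v, B in refs.items()}
    F = {v: [0] * (len(B) + 1) for v, B in refs.items()}
    DB = [0] + [INF] * M                        # DB(0)=0, DB(m)=inf
    W = [None] * (M + 1)
    FB = [0] * (M + 1)

    for m in range(1, M + 1):
        for v, B in refs.items():
            Dv, Fv = D[v], F[v]
            Dv[0], Fv[0] = DB[m - 1], m - 1     # expressions (33), (34)
            for n in range(len(B), 0, -1):      # decreasing n, exprs (35)-(37)
                k = min(range(max(0, n - 2), n + 1), key=lambda j: Dv[j])
                Fv[n] = Fv[k]
                Dv[n] = d(A[m - 1], B[n - 1]) + Dv[k]
            if Dv[-1] < DB[m]:                  # expressions (38)-(40)
                DB[m], W[m], FB[m] = Dv[-1], v, Fv[-1]

    # decision, expressions (41), (42): backtrace from m = M
    out, m = [], M
    while m > 0:
        out.append(W[m])
        m = FB[m]
    return out[::-1]
```

The in-place update of Dv in decreasing n is what lets a single two-dimensional memory (v, n) stand in for the per-digit memories of the first embodiment.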

As described, according to the present invention, the similarity measure can be calculated collectively on one digit instead of being carried out on each digit by removing the limitation of the digit number of an input pattern, thus decreasing both required memory capacity and calculation quantity to 1/Lmax (Lmax being a maximum digit number of input speech) of the first embodiment.

The composition of the continuous speech recognition system according to this embodiment is shown in FIG. 13. The embodiment is basically the same as that of FIG. 5, except that the system is free from control by a signal indicating the digit l and the distance memory 16 is not required. FIG. 14 shows a time chart of signals controlling the embodiment of FIG. 13; signals corresponding to those given in FIG. 6 are identified by the same symbols. A signal n4, which decreases by one sequentially from the end Nv of the v-th reference word pattern, is used instead of the signal n1. The construction and operation of the input unit 11, the input pattern memory 12, the reference pattern memory 13, the reference pattern length memory 14 and the distance calculating unit 15 are the same as those of FIG. 5, and hence no further description will be given here.

A similarity measure D(v, n) and path information F(v, n) are stored in a similarity measure memory 18' and a path information memory 19' respectively in the two-dimensional arrangement shown in FIG. 15. The information DB(m), FB(m) and W(m) stored in a digit similarity measure memory 21', a digit recognition category memory 22' and a digit path information memory 23' is of the one-dimensional construction shown in FIG. 16.

An initialization for the asymptotic calculation is performed by a signal Cl1 from the control unit 10 before speech is inputted, and values given in the expressions (30), (31), (32) are set in the similarity measure memory 18' and the digit similarity measure memory 21'.

An asymptotic calculating unit 17' executes the asymptotic expressions (35), (36), (37). In detail, the asymptotic calculating unit 17' is similar to the construction shown in FIG. 8 and is comprised, as shown in FIG. 17, of three similarity measure registers 173', 174', 175', a comparator 171', an adder 172', and three path registers 176', 177', 178'. Three similarity measures D(v, n), D(v, n-1), D(v, n-2) and three path information values F(v, n), F(v, n-1), F(v, n-2) are read out of the similarity measure memory 18' and the path memory 19' according to the signal n4 indicating a time point n of the reference pattern and signals n41 and n42 indicating the time points (n-1) and (n-2), which are issued from the control unit 10, and they are stored in the similarity measure registers 173'˜175' and the path registers 176'˜178' respectively. The comparator 171' detects a minimum value from the three similarity measure registers 173'˜175' and issues a gate signal n for selecting the path register corresponding to the similarity measure register whereby the minimum value is obtained. The content of the path register selected according to the gate signal n is stored in the path memory 19' as F(v, n). Then, the minimum similarity measure value D(v, n) from the comparator 171' is added to the distance d(am, bn v) from the distance calculating unit 15 through the adder 172' and stored in the similarity measure memory 18'.

The asymptotic calculation is carried out from n=Nv to 1, and the similarity measure D(v, Nv) is computed for each v.

A digit similarity measure calculating unit 20' performs the processing of the expressions (38), (39), (40) to obtain, one after another, the minimum values among the V similarity measures D(v, Nv). Namely, as shown in FIG. 18, the digit similarity measure calculating unit 20' is comprised of a comparator 201', a register 202' to hold the similarity measure D(v, Nv), a register 203' to hold the category v, indicated by the signal v1, to which a reference word pattern belongs, and a register 204' to hold the path information F(v, Nv). The similarity measure D(v, Nv) and the path information F(v, Nv) read out of the similarity measure memory 18' and the path memory 19' according to a signal n1 generated from the control unit 10 are stored in the registers 202' and 204' respectively, and the category v to which the reference word pattern belongs is stored in the register 203'. The comparator 201' compares the similarity measure D(v, Nv) with the digit similarity measure DB(m) read out of the digit similarity measure memory 21' and then generates a gate signal v if D(v, Nv) is less than DB(m). The similarity measure D(v, Nv), the category v, and the path information F(v, Nv) held in the registers 202', 203', and 204' are stored in the digit similarity measure memory 21' as DB(m), the digit recognition category memory 22' as W(m), and the digit path memory 23' as FB(m) respectively according to the gate signal v.

Further, the initialization shown in the expressions (33), (34) for the similarity measure calculation in a vertical string is carried out according to a signal n5, indicating the time point 0 of the reference pattern, which is issued from the control unit 10, and a signal m4 specifying the time point (m-1) one previous to the time point m of the input pattern. DB(m-1) is read out of the digit similarity measure memory 21' and stored in the similarity measure memory 18' as D(v, o). Then, a signal md indicating the address value specified by the signal m4 is supplied to the path information memory 19' from the control unit 10, and the value (m-1) indicated by the signal md is stored in the path memory 19' as F(v, o) at the address specified by the signals v1 and n5.

A decision unit 24' carries out the processing given in the expressions (41), (42), and generates a recognized result R(l) on each digit of the input pattern based on the digit path information FB(m) and the digit recognition category W(m). The decision unit 24' is comprised, as shown in FIG. 19, of a register 244' to hold the digit path information FB(m), a register 245' to hold a recognized result, and a decision control unit 246'. Upon detection of the end of a speech in the input unit 11, the control unit 10 issues a signal DST for starting the above decision processing to the decision unit 24' in response to the signal SP. After receipt of the signal DST, the decision control unit 246' issues an address signal m2 as m=M to the digit path memory 23' and the digit recognition category memory 22' to read out FB(M) and W(M), and they are stored in the register 244' and the register 245'. The content of the register 245' is outputted as a recognized result. Further, the decision control unit 246' issues the address signal m2 as m=(value stored in the register 244') to the digit path memory 23' and the digit recognition category memory 22', and FB(m) and W(m) are read and then stored in the register 244' and the register 245'. The recognized results are outputted from the register 245' by repeating the processing sequentially from m=M to m=0.

Procedures for the above processing of a system according to this embodiment are shown in flowchart in FIG. 20A and FIG. 20B.

The principle and concrete examples of the present invention have been described above; however, the description refers only to examples, and hence the invention may be practiced variously otherwise than as specifically illustrated and described. For example, it is apparent that the order of the calculating loops for recognition processing can be arranged as l, n, v, m instead of the order n, l, v, m given in this disclosure.

Further, while the distance between the input vector am and the reference vector bn v has been described by means of the Chebyshev distance scale as in the expression (3), the Euclidean distance as in the expression (43) or the inner product as in the expression (44) can be used instead. ##EQU6##
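The three distance scales can be sketched as below; since expressions (43) and (44) are reproduced here only as placeholders, the exact forms — and in particular the negation of the inner product so that a smaller value means a closer match — are assumptions:

```python
def chebyshev(a, b):
    # expression (3): sum of absolute differences of the feature parameters
    return sum(abs(x - y) for x, y in zip(a, b))

def euclid(a, b):
    # expression (43), assumed to be the root of summed squared differences
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def inner_product(a, b):
    # expression (44); negated here (an assumption) so that a smaller
    # value, like the other two scales, indicates a closer match
    return -sum(x * y for x, y in zip(a, b))
```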

Then, the expression (4) is a fundamental asymptotic expression for DP matching; however, the asymptotic expression proposed in the paper "Dynamic Programming Algorithm Optimization for Spoken Word Recognition," IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. ASSP-26, No. 1, Feb. 1978, pp. 43-49, is also effective for the prevention of erroneous recognition. In this paper, techniques are provided to remove the possibility of unnatural deformation of the matching path by imposing an inclination limitation on the matching path. The asymptotic expression given in the following expression (45), for example, is one for removing the possibility of a horizontal matching path. ##EQU7## The path information in this case can be expressed by

Fm (l, v, n)=Fm-1 (l, v, n̂)                            (46)

As another example, the asymptotic expression given in the following expression (47), whereby the value of the distance between the time points (m, n) and (m-1, n-2) is appreciated more precisely by taking a value midway along the path into consideration, can be applied. ##EQU8## The path information in this case can be expressed by the expression (48)

Fm (l, v, n)=Fm-1 (l, v, n̂)                            (48)

It goes without saying that these asymptotic expressions can be applied to the embodiment of this invention shown in FIG. 13 by removing the digit information l in each of the above examples.

Claims (23)

What is claimed is:
1. A continuous speech recognition system for recognizing an input speech composed of a plurality of continuously spoken words comprising:
a speech analyzing means for analyzing an input signal at every given frame time point m and outputting an input pattern expressed in a time series of a feature vector consisting of a predetermined number of feature parameters;
an input pattern memory to store said input pattern;
a reference pattern memory to store a reference pattern consisting of said feature vector in the same format as said input pattern for each of a plurality (V) of predetermined words to be recognized;
a distance calculating means to calculate the distance between the feature vector of said input pattern at a time point m and the feature vector of the reference pattern of the v-th word at a time point n under a predetermined distance formula by changing the time point n of the reference pattern of the v-th word in an arbitrary order from the start point to end point Nv while changing the reference pattern v from the first word to end word V for each time point m of the input pattern, said each time point m changing from the start point to an end point M;
an asymptotic calculating means to calculate a similarity measure D(v, n) given by the cumulative sum of said distances at said time points n and path information F(v, n) indicating a time point of the input pattern at the start point of said v-th reference pattern on a path through which said similarity measure D(v, n) has been obtained by a predetermined asymptotic expression according to a dynamic programming process while changing the time point n of the reference pattern of the v-th word in an arbitrary order from the start point to end point Nv and while changing the reference pattern v from the first word to end word for each time point m of the input pattern, said each time point m changing from the start point to end point M;
a digit similarity measure and digit path information calculating means to select a minimum similarity measure from among the similarity measures at the end time points of reference patterns of all the words obtained through said asymptotic calculating means, and to provide said minimum similarity measure as a digit similarity measure DB(m), a category to which the word corresponding to said minimum similarity measure belongs as a digit recognition category W(m)=v, and path information corresponding to said minimum similarity measure as a digit path information FB(m) at said time point m, for each time point m of the input pattern, said time point m changing from the start point to end point M;
an initializing means to give digit similarity measure DB(m-1) as an initial value of similarity measure and to give a time point (m-1) as an initial value of said path information at a time point m while changing the time point m of the input pattern from the start point to end point M;
a decision means to obtain a recognized result at a final digit from said digit recognition category W(M) at the end point M of said input pattern, to obtain an end point of said input pattern on the digit previous to the final digit by one from said digit path information at the end point M, to obtain a recognized result on the digit previous to said final digit by one from said digit recognition category at the end point, and to obtain a recognized result at each digit sequentially toward the start point of said input pattern.
2. A continuous speech recognition system for recognizing an input speech composed of a plurality of continuously spoken words comprising:
a speech analyzing means for analyzing an input signal at every given frame time point m and outputting an input pattern expressed in a time series of a feature vector comprising a predetermined number of feature parameters;
an input pattern memory to store said input pattern;
a reference pattern memory to store a reference pattern comprising a feature vector in the same format as said input pattern for each of a plurality (V) of predetermined words to be recognized;
a distance calculating means to calculate the distance between the feature vector of said input pattern at a time point m and the feature vector of the reference pattern of the v-th word at a time point n at every time point n under a predetermined distance formula by changing the time point n of the reference pattern of the v-th word in an arbitrary order from the start point to end point Nv while changing the reference pattern v from the first word to end word V for each time point m of the input pattern, said each time point m changing from the start point to an end point M;
an asymptotic calculating means to calculate a similarity measure D(l, v, n) given by the cumulative sum of said distances on the l-th digit at said time points n and a path information F(l, v, n) indicating a time point of the input pattern at the start point of said v-th reference pattern on a path through which said similarity measure D(l, v, n) has been obtained by a predetermined asymptotic expression according to a dynamic programming process while changing the time point n of the reference pattern of the v-th word in an arbitrary order from the start point to end point Nv, while changing digit number l from one to L, and while changing the reference pattern v from the first word to the end word V for each time point m of the input pattern, said time point m changing from the start point to end point M;
a digit similarity measure and digit path information calculating means to select a minimum similarity measure from among the similarity measures at the end time points of the reference patterns of all the words on the l-th digit obtained through said asymptotic calculating means, and to provide said minimum similarity measure as a digit similarity measure DB(l, m), a category to which the word corresponding to said minimum similarity measure belongs as a digit recognition category W(l, m)=v, and path information corresponding to said minimum similarity measure as a digit path information FB(l, m) on said digit l at said time point m while changing digit number l from one to L for each time point m of the input pattern, said each time point m changing from the start point to end point M;
an initializing means to give said digit similarity measure DB(l-1, m-1) as an initial value of said similarity measure at a time point (m-1) and to give time point (m-1) as an initial value of said path information at a time point m while changing the time point m of the input pattern from the start point to end point M;
a decision means to obtain a final digit from said similarity measure DB(l, m), to obtain a recognized result at said final digit from said digit recognition category W(l, M), at the end point M of said input pattern, to obtain an end point of said input pattern on the digit previous to the final digit by one from said digit path information at the end point M, to obtain a recognized result of the digit prior to said final digit by one from said digit recognition category at the end point, and to obtain a recognized result at each digit sequentially toward the start point of said input pattern.
3. A continuous speech recognition system according to claim 1, wherein said arbitrary order in the asymptotic calculating means is an order in which said calculation is carried out by decreasing by one sequentially from Nv.
4. A continuous speech recognition system according to claim 1, wherein said distance formula is any of the Chebyshev distance, the Euclidean distance, or the inner product distance.
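For reference, standard textbook forms of the three distance measures named in this claim can be sketched as follows. The patent's own formulas are given only as image placeholders, so the cosine-based form of the inner product distance below is one common choice, not taken from the patent:

```python
import math

def chebyshev(a, b):
    # maximum absolute component difference (L-infinity norm)
    return max(abs(x - y) for x, y in zip(a, b))

def euclidean(a, b):
    # square root of the sum of squared component differences (L2 norm)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def inner_product_distance(a, b):
    # dissimilarity derived from the inner product; here 1 - cosine
    # similarity, a common normalization (an assumption of this sketch)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)
```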
5. A continuous speech recognition system according to claim 1, wherein said asymptotic expression is ##EQU9## where Dm-1 (v, n) denotes the similarity measure between a reference pattern of the v-th word at a time point n and the input pattern at a time point (m-1).
6. A continuous speech recognition system according to claim 1, wherein said predetermined asymptotic expression is ##EQU10##
7. A continuous speech recognition system according to claim 1, wherein said asymptotic expression is ##EQU11##
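The recurrences recited in claims 5 through 7 appear in the source only as image placeholders (##EQU9## through ##EQU11##) and are not reproduced here. As a hedged illustration, a single step of a slope-constrained DP recurrence of the general kind these claims cover might look like the following; the exact predecessor set and any weighting in the patent may differ:

```python
INF = float("inf")

def dp_step(D_prev, d_mn, n):
    """One asymptotic step: the new similarity measure at reference time
    point n, given the previous input frame's column D_prev (indexed by
    reference time point) and the frame distance d(m, n). The min-of-
    three predecessor set is an assumption of this sketch."""
    cands = [D_prev[n]]              # same reference frame (slope 0)
    if n >= 1:
        cands.append(D_prev[n - 1])  # diagonal step (slope 1)
    if n >= 2:
        cands.append(D_prev[n - 2])  # skip one reference frame (slope 2)
    return d_mn + min(cands)
```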
8. A continuous speech recognition system according to claim 2, wherein a limitation is given to the time points of said reference pattern and input pattern by a window function.
9. A continuous speech recognition system according to claim 2, wherein said distance calculation in said distance calculating means is carried out on one digit only.
10. A continuous speech recognition system according to claim 1, wherein said distance calculating means comprises an absolute value circuit to output the absolute value of the difference between the feature parameter comprising said input pattern and the feature parameter comprising said reference pattern, an adder circuit with an input being an output of the absolute value circuit, and a register to store the output of said adder circuit and also to output the stored contents to the other input of said adder circuit.
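The circuit recited in claim 10 (an absolute-value stage feeding an adder whose other input is a feedback register) accumulates a sum of absolute differences over feature parameters. A purely illustrative software model of that datapath:

```python
class DistanceAccumulator:
    """Software model of the claimed circuit: the absolute value of the
    difference between an input-pattern parameter and a reference-pattern
    parameter is added to a register whose stored contents feed back into
    the adder, so the register accumulates a city-block distance."""

    def __init__(self):
        self.register = 0  # stored contents, fed back to the adder input

    def clock(self, input_param, ref_param):
        diff = abs(input_param - ref_param)  # absolute value circuit
        self.register = self.register + diff  # adder output stored back
        return self.register
```

Clocking the model once per feature component yields the accumulated distance after the last component.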
11. A continuous speech recognition system according to claim 1, wherein said asymptotic calculating means is provided with a plurality of similarity measure registers to store similarity measures at predetermined plural time points n, a plurality of path information registers to store path information at said predetermined plural time points n, a comparator to select and output a minimum value from among the values stored in said plurality of similarity measure registers and also to output a signal indicating a time point n corresponding to said selected similarity measure stored in the similarity measure register, an adder to output an added result as a similarity measure obtained newly with said minimum similarity measure as one input and the distance information obtained through said distance calculating means as another input, and means to output the contents stored in said path information register corresponding to said time point n as new path information.
12. A continuous speech recognition system according to claim 1, wherein said digit similarity measure and digit path information calculating means comprises a first register to hold a word similarity measure at the end of said reference word pattern, a second register to hold the category to which the word corresponding to the word similarity measure held in the first register belongs, a third register to hold the path information corresponding to the word similarity measure held in said first register, and a comparator to compare the word similarity measure read out of said first register with a word similarity measure obtained currently, to select the smaller similarity measure and write it in said first register, and to write the category and the path information corresponding to said selected similarity measure in said second and third registers respectively.
13. A continuous speech recognition system according to claim 1, wherein said decision means is provided with a first register to hold said digit path information, a second register to hold a recognized result, and a decision control means to output an address signal indicating the time point of said input pattern by changing the time point sequentially in a decreasing direction from M after the end point M of said input pattern has been detected, to store the digit path information and the recognized result at the time point of said corresponding input pattern in said first and second registers, and to repeat the operation to output a signal indicating the time point of the input pattern held in said first register as said address signal.
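The decision control of claims 13 and 14 amounts to a backtrace: select the smallest digit similarity measure at the input end point M, then repeatedly follow the stored digit path information toward the start point, collecting the recognized category at each digit. A hedged sketch, where the table layout (lists indexed by digit l and time point m) is an assumption of this illustration:

```python
def decide(DB, W, FB, M):
    """Decision-control sketch. DB[l][m]: digit similarity measure;
    W[l][m]: digit recognition category; FB[l][m]: digit path
    information (end point of the previous digit). M: input end point."""
    L = len(DB) - 1
    # comparator + first register: smallest digit similarity at M
    l = min(range(1, L + 1), key=lambda k: DB[k][M])
    out, m = [], M
    while l >= 1 and m > 0:
        out.append(W[l][m])  # recognized-result register
        m = FB[l][m]         # next address: previous digit's end point
        l -= 1               # one previous digit
    return list(reversed(out))
```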
14. A continuous speech recognition system according to claim 2, wherein said decision means is provided with a comparator to compare one input of the digit similarity measure on each digit at the end point M of said input pattern sequentially with another input, and to output the lesser similarity measure, a first register to hold the output of said comparator and also to output the contents held therein to the other input of said comparator, a second register to hold a digit of said digit similarity measure, a third register to hold said digit path information, a fourth register to hold a recognized result, and a decision control means to output an address signal indicating the time point of said input pattern by changing it sequentially in a decreasing direction from M after the end point M of said input pattern has been detected, to store the digit path information and the recognized result corresponding to the time point of said input pattern in said third and fourth registers, and to repeat an operation to output a signal indicating the time point of the input pattern held in said third register as said address signal and another signal indicating one previous digit.
15. A continuous speech recognition system according to claim 2, wherein said arbitrary order in the asymptotic calculating means is an order in which said calculation is carried out by decreasing by one sequentially from Nv.
16. A continuous speech recognition system according to claim 2, wherein said distance formula is any of the Chebyshev distance, the Euclidean distance, or the inner product distance.
17. A continuous speech recognition system according to claim 2, wherein said asymptotic expression is ##EQU12## wherein Dm-1 (v, n) denotes the similarity measure between the reference pattern of the v-th word at a time point n and the input pattern at a time point (m-1).
18. A continuous speech recognition system according to claim 2, wherein said predetermined asymptotic expression is ##EQU13##
19. A continuous speech recognition system according to claim 2, wherein said asymptotic expression is ##EQU14##
20. A continuous speech recognition system according to claim 2, wherein said distance calculating means comprises an absolute value circuit to output the absolute value of the difference between the feature parameter comprising said input pattern and the feature parameter comprising said reference pattern, an adder circuit with an input being an output of the absolute value circuit, and a register to store the output of said adder circuit and also to output the stored contents to the other input of said adder circuit.
21. A continuous speech recognition system according to claim 2, wherein said asymptotic calculating means is provided with a plurality of similarity measure registers to store similarity measures at predetermined plural time points n, a plurality of path information registers to store path information at said predetermined plural time points n, a comparator to select and output a minimum value from among the values stored in said plurality of similarity measure registers and also to output a signal indicating a time point n corresponding to said selected similarity measure stored in the similarity measure register, an adder to output an added result as a newly obtained similarity measure with said minimum similarity measure as one input and the distance information obtained through said distance calculating means as another input, and means to output the contents stored in said path information register corresponding to said time point n as new path information.
22. A continuous speech recognition system according to claim 2, wherein said digit similarity measure and digit path information calculating means comprises a first register to hold a word similarity measure at the end of said reference word pattern, a second register to hold the category to which the word corresponding to the word similarity measure held in the first register belongs, a third register to hold the path information corresponding to the word similarity measure held in said first register, and a comparator to compare the word similarity measure read out of said first register with a word similarity measure obtained currently, to select the smaller similarity measure and write it in said first register, and to write the category and the path information corresponding to said selected similarity measure in said second and third registers respectively.
23. A continuous speech recognition system according to claim 2, wherein said decision means is provided with a first register to hold said digit path information, a second register to hold a recognized result, and a decision control means to output an address signal indicating the time point of said input pattern by changing the time point sequentially in a decreasing direction from M after the end point M of said input pattern has been detected, to store the digit path information and the recognized result at the time point of said corresponding input pattern in said first and second registers, and to repeat the operation to output a signal indicating the time point of the input pattern held in said first register as said address signal.
US06447829 1981-12-09 1982-12-08 Continuous speech recognition system Expired - Lifetime US4592086A (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP56-197841 1981-12-09
JP19784181A JPH0134399B2 (en) 1981-12-09 1981-12-09
JP56-208791 1981-12-23
JP20879181A JPH0134400B2 (en) 1981-12-23 1981-12-23

Publications (1)

Publication Number Publication Date
US4592086A true US4592086A (en) 1986-05-27

Family

ID=26510605

Family Applications (1)

Application Number Title Priority Date Filing Date
US06447829 Expired - Lifetime US4592086A (en) 1981-12-09 1982-12-08 Continuous speech recognition system

Country Status (4)

Country Link
US (1) US4592086A (en)
EP (1) EP0081390B1 (en)
CA (1) CA1193013A (en)
DE (1) DE3267835D1 (en)

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4813076A (en) * 1985-10-30 1989-03-14 Central Institute For The Deaf Speech processing apparatus and methods
US4820059A (en) * 1985-10-30 1989-04-11 Central Institute For The Deaf Speech processing apparatus and methods
US4901352A (en) * 1984-04-05 1990-02-13 Nec Corporation Pattern matching method using restricted matching paths and apparatus therefor
US4918733A (en) * 1986-07-30 1990-04-17 At&T Bell Laboratories Dynamic time warping using a digital signal processor
US5131043A (en) * 1983-09-05 1992-07-14 Matsushita Electric Industrial Co., Ltd. Method of and apparatus for speech recognition wherein decisions are made based on phonemes
US5410635A (en) * 1987-11-25 1995-04-25 Nec Corporation Connected word recognition system including neural networks arranged along a signal time axis
US5446884A (en) * 1992-02-13 1995-08-29 International Business Machines Corporation Database recovery apparatus and method
WO1996030893A1 (en) * 1995-03-30 1996-10-03 Advanced Recognition Technologies, Inc. Pattern recognition system
US5642444A (en) * 1994-07-28 1997-06-24 Univ North Carolina Specialized image processing system architecture and method for image data arrays
EP0789349A2 (en) 1996-02-09 1997-08-13 Canon Kabushiki Kaisha Pattern matching method and apparatus and telephone system
EP0810583A2 (en) * 1996-05-30 1997-12-03 Nec Corporation Speech recognition system
US5799274A (en) * 1995-10-09 1998-08-25 Ricoh Company, Ltd. Speech recognition system and method for properly recognizing a compound word composed of a plurality of words
US5812739A (en) * 1994-09-20 1998-09-22 Nec Corporation Speech recognition system and speech recognition method with reduced response time for recognition
US5819219A (en) * 1995-12-11 1998-10-06 Siemens Aktiengesellschaft Digital signal processor arrangement and method for comparing feature vectors
EP0908868A2 (en) * 1997-10-10 1999-04-14 Philips Electronics N.V. Computer arrangement for speech recognition
US6062474A (en) * 1997-10-02 2000-05-16 Kroll; Mark William ATM signature security system
US6151571A (en) * 1999-08-31 2000-11-21 Andersen Consulting System, method and article of manufacture for detecting emotion in voice signals through analysis of a plurality of voice signal parameters
US6195638B1 (en) * 1995-03-30 2001-02-27 Art-Advanced Recognition Technologies Inc. Pattern recognition system
US6226610B1 (en) 1998-02-10 2001-05-01 Canon Kabushiki Kaisha DP Pattern matching which determines current path propagation using the amount of path overlap to the subsequent time point
US6353810B1 (en) 1999-08-31 2002-03-05 Accenture Llp System, method and article of manufacture for an emotion detection system improving emotion recognition
EP1189200A1 (en) * 2000-09-12 2002-03-20 Pioneer Corporation Voice recognition system
US6427137B2 (en) 1999-08-31 2002-07-30 Accenture Llp System, method and article of manufacture for a voice analysis system that detects nervousness for preventing fraud
US6463415B2 (en) 1999-08-31 2002-10-08 Accenture Llp 69voice authentication system and method for regulating border crossing
US20020153416A1 (en) * 1997-10-02 2002-10-24 Kroll Mark W. Magnetic card swipe signature security system
US20020194002A1 (en) * 1999-08-31 2002-12-19 Accenture Llp Detecting emotions using voice signal analysis
US20030023444A1 (en) * 1999-08-31 2003-01-30 Vicki St. John A voice recognition system for navigating on the internet
US6697457B2 (en) 1999-08-31 2004-02-24 Accenture Llp Voice messaging system that organizes voice messages based on detected emotion
US20060084992A1 (en) * 1988-06-13 2006-04-20 Michelson Gary K Tubular member having a passage and opposed bone contacting extensions
US20110178803A1 (en) * 1999-08-31 2011-07-21 Accenture Global Services Limited Detecting emotion in voice signals in a call center
US9047871B2 2012-12-12 2015-06-02 At&T Intellectual Property I, L.P. Real-time emotion tracking system
US9514747B1 (en) * 2013-08-28 2016-12-06 Amazon Technologies, Inc. Reducing speech recognition latency

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
JPH0466039B2 (en) * 1983-10-27 1992-10-21 Nippon Electric Co

Citations (1)

Publication number Priority date Publication date Assignee Title
US4286115A (en) * 1978-07-18 1981-08-25 Nippon Electric Co., Ltd. System for recognizing words continuously spoken according to a format

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
GB1557286A (en) * 1975-10-31 1979-12-05 Nippon Electric Co Speech recognition

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
US4286115A (en) * 1978-07-18 1981-08-25 Nippon Electric Co., Ltd. System for recognizing words continuously spoken according to a format

Non-Patent Citations (2)

Title
Myers et al., "Level Building Dynamic Time Warping Algorithm etc.," IEEE Trans. on Acoustics, etc., Apr. 1981, pp. 284-297.

Cited By (49)

Publication number Priority date Publication date Assignee Title
US5131043A (en) * 1983-09-05 1992-07-14 Matsushita Electric Industrial Co., Ltd. Method of and apparatus for speech recognition wherein decisions are made based on phonemes
US4901352A (en) * 1984-04-05 1990-02-13 Nec Corporation Pattern matching method using restricted matching paths and apparatus therefor
US4813076A (en) * 1985-10-30 1989-03-14 Central Institute For The Deaf Speech processing apparatus and methods
US4820059A (en) * 1985-10-30 1989-04-11 Central Institute For The Deaf Speech processing apparatus and methods
US4918733A (en) * 1986-07-30 1990-04-17 At&T Bell Laboratories Dynamic time warping using a digital signal processor
US5410635A (en) * 1987-11-25 1995-04-25 Nec Corporation Connected word recognition system including neural networks arranged along a signal time axis
US20060084992A1 (en) * 1988-06-13 2006-04-20 Michelson Gary K Tubular member having a passage and opposed bone contacting extensions
US5446884A (en) * 1992-02-13 1995-08-29 International Business Machines Corporation Database recovery apparatus and method
US5642444A (en) * 1994-07-28 1997-06-24 Univ North Carolina Specialized image processing system architecture and method for image data arrays
US5812739A (en) * 1994-09-20 1998-09-22 Nec Corporation Speech recognition system and speech recognition method with reduced response time for recognition
US5809465A (en) * 1995-03-30 1998-09-15 Advanced Recognition Technology Pattern recognition system
WO1996030893A1 (en) * 1995-03-30 1996-10-03 Advanced Recognition Technologies, Inc. Pattern recognition system
US6195638B1 (en) * 1995-03-30 2001-02-27 Art-Advanced Recognition Technologies Inc. Pattern recognition system
US5799274A (en) * 1995-10-09 1998-08-25 Ricoh Company, Ltd. Speech recognition system and method for properly recognizing a compound word composed of a plurality of words
US5819219A (en) * 1995-12-11 1998-10-06 Siemens Aktiengesellschaft Digital signal processor arrangement and method for comparing feature vectors
US20020032566A1 (en) * 1996-02-09 2002-03-14 Eli Tzirkel-Hancock Apparatus, method and computer readable memory medium for speech recogniton using dynamic programming
EP0789349A2 (en) 1996-02-09 1997-08-13 Canon Kabushiki Kaisha Pattern matching method and apparatus and telephone system
US5960395A (en) * 1996-02-09 1999-09-28 Canon Kabushiki Kaisha Pattern matching method, apparatus and computer readable memory medium for speech recognition using dynamic programming
US7062435B2 (en) 1996-02-09 2006-06-13 Canon Kabushiki Kaisha Apparatus, method and computer readable memory medium for speech recognition using dynamic programming
US5909665A (en) * 1996-05-30 1999-06-01 Nec Corporation Speech recognition system
EP0810583A2 (en) * 1996-05-30 1997-12-03 Nec Corporation Speech recognition system
EP0810583A3 (en) * 1996-05-30 1998-10-07 Nec Corporation Speech recognition system
US20020153416A1 (en) * 1997-10-02 2002-10-24 Kroll Mark W. Magnetic card swipe signature security system
US6062474A (en) * 1997-10-02 2000-05-16 Kroll; Mark William ATM signature security system
US6817520B2 (en) 1997-10-02 2004-11-16 Kroll Family Trust Magnetic card swipe signature security system
US6405922B1 (en) 1997-10-02 2002-06-18 Kroll Family Trust Keyboard signature security system
EP0908868A3 (en) * 1997-10-10 2001-04-18 Philips Electronics N.V. Computer arrangement for speech recognition
EP0908868A2 (en) * 1997-10-10 1999-04-14 Philips Electronics N.V. Computer arrangement for speech recognition
US6226610B1 (en) 1998-02-10 2001-05-01 Canon Kabushiki Kaisha DP Pattern matching which determines current path propagation using the amount of path overlap to the subsequent time point
US20070162283A1 (en) * 1999-08-31 2007-07-12 Accenture Llp: Detecting emotions using voice signal analysis
US6463415B2 (en) 1999-08-31 2002-10-08 Accenture Llp 69voice authentication system and method for regulating border crossing
US6427137B2 (en) 1999-08-31 2002-07-30 Accenture Llp System, method and article of manufacture for a voice analysis system that detects nervousness for preventing fraud
US20020194002A1 (en) * 1999-08-31 2002-12-19 Accenture Llp Detecting emotions using voice signal analysis
US20030023444A1 (en) * 1999-08-31 2003-01-30 Vicki St. John A voice recognition system for navigating on the internet
US6697457B2 (en) 1999-08-31 2004-02-24 Accenture Llp Voice messaging system that organizes voice messages based on detected emotion
US20110178803A1 (en) * 1999-08-31 2011-07-21 Accenture Global Services Limited Detecting emotion in voice signals in a call center
US7627475B2 (en) 1999-08-31 2009-12-01 Accenture Llp Detecting emotions using voice signal analysis
US8965770B2 (en) 1999-08-31 2015-02-24 Accenture Global Services Limited Detecting emotion in voice signals in a call center
US6151571A (en) * 1999-08-31 2000-11-21 Andersen Consulting System, method and article of manufacture for detecting emotion in voice signals through analysis of a plurality of voice signal parameters
US7222075B2 (en) 1999-08-31 2007-05-22 Accenture Llp Detecting emotions using voice signal analysis
US6353810B1 (en) 1999-08-31 2002-03-05 Accenture Llp System, method and article of manufacture for an emotion detection system improving emotion recognition
US7590538B2 (en) 1999-08-31 2009-09-15 Accenture Llp Voice recognition system for navigating on the internet
US20050091053A1 (en) * 2000-09-12 2005-04-28 Pioneer Corporation Voice recognition system
US20020049592A1 (en) * 2000-09-12 2002-04-25 Pioneer Corporation Voice recognition system
EP1189200A1 (en) * 2000-09-12 2002-03-20 Pioneer Corporation Voice recognition system
US9047871B2 2012-12-12 2015-06-02 At&T Intellectual Property I, L.P. Real-time emotion tracking system
US9355650B2 (en) 2012-12-12 2016-05-31 At&T Intellectual Property I, L.P. Real-time emotion tracking system
US9570092B2 (en) 2012-12-12 2017-02-14 At&T Intellectual Property I, L.P. Real-time emotion tracking system
US9514747B1 (en) * 2013-08-28 2016-12-06 Amazon Technologies, Inc. Reducing speech recognition latency

Also Published As

Publication number Publication date Type
EP0081390B1 (en) 1985-12-04 grant
CA1193013A1 (en) grant
DE3267835D1 (en) 1986-01-16 grant
CA1193013A (en) 1985-09-03 grant
EP0081390A1 (en) 1983-06-15 application

Similar Documents

Publication Publication Date Title
US5148489A (en) Method for spectral estimation to improve noise robustness for speech recognition
US5315689A (en) Speech recognition system having word-based and phoneme-based recognition means
US5983180A (en) Recognition of sequential data using finite state sequence models organized in a tree structure
US5822730A (en) Lexical tree pre-filtering in speech recognition
US5734791A (en) Rapid tree-based method for vector quantization
US6014618A (en) LPAS speech coder using vector quantized, multi-codebook, multi-tap pitch predictor and optimized ternary source excitation codebook derivation
US5794196A (en) Speech recognition system distinguishing dictation from commands by arbitration between continuous speech and isolated word modules
US6038535A (en) Speech classifier and method using delay elements
US5956675A (en) Method and apparatus for word counting in continuous speech recognition useful for reliable barge-in and early end of speech detection
US6167377A (en) Speech recognition language models
US5010574A (en) Vector quantizer search arrangement
US5077798A (en) Method and system for voice coding based on vector quantization
US5255342A (en) Pattern recognition system and method using neural network
US5146539A (en) Method for utilizing formant frequencies in speech recognition
US4467437A (en) Pattern matching device with a DP technique applied to feature vectors of two information compressed patterns
US4956865A (en) Speech recognition
US5220640A (en) Neural net architecture for rate-varying inputs
US4837831A (en) Method for creating and using multiple-word sound models in speech recognition
US6226610B1 (en) DP Pattern matching which determines current path propagation using the amount of path overlap to the subsequent time point
US4400788A (en) Continuous speech pattern recognizer
US4624008A (en) Apparatus for automatic speech recognition
US4489434A (en) Speech recognition method and apparatus
US6421641B1 (en) Methods and apparatus for fast adaptation of a band-quantized speech decoding system
US4905287A (en) Pattern recognition system
US4489435A (en) Method and apparatus for continuous word string recognition

Legal Events

Date Code Title Description
AS Assignment

Owner name: NIPPON ELECTRIC CO., LTD., 33-1, SHIBA GOCHOME, MI

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST.;ASSIGNORS:WATARI, MASAO;SAKOE, HIROAKI;REEL/FRAME:004520/0469

Effective date: 19821203

Owner name: NIPPON ELECTRIC CO., LTD.,JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WATARI, MASAO;SAKOE, HIROAKI;REEL/FRAME:004520/0469

Effective date: 19821203

CC Certificate of correction
FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12