US5369728A - Method and apparatus for detecting words in input speech data - Google Patents

Info

Publication number
US5369728A
US5369728A
Authority
US
United States
Prior art keywords
reference pattern
data
input speech
word
silence
Prior art date
Legal status
Expired - Lifetime
Application number
US07/895,813
Inventor
Tetsuo Kosaka
Atsushi Sakurai
Junichi Tamura
Hiroshi Matsuo
Current Assignee
Canon Inc
Original Assignee
Canon Inc
Priority date
Filing date
Publication date
Application filed by Canon Inc
Assigned to CANON KABUSHIKI KAISHA. Assignors: KOSAKA, TETSUO; MATSUO, HIROSHI; SAKURAI, ATSUSHI; TAMURA, JUNICHI
Application granted
Publication of US5369728A
Anticipated expiration
Legal status: Expired - Lifetime

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/10 Speech classification or search using distance or distortion measures between unknown speech and reference templates

Definitions

  • FIG. 5 shows, by way of example, the manner in which data is obtained during matching with a certain reference pattern.
  • In Step S4, the DP value D(i) given by expression (3) is compared with a preset threshold value. If D(i) is less than the threshold value, Step S6 is executed; if D(i) is greater than or equal to the threshold value, Step S5 is executed.
  • In Step S5, the word spotting unit checks whether the calculation in Step S3 has been performed for all frames of the selected section. If the calculation for all frames is complete, Step S8A is executed; if not, the distances of the remaining frames are calculated by returning to Step S3.
  • In Step S6, as shown in FIG. 4, the word spotting unit 3 determines the point at which the DP value is a minimum within the section in which it is less than the threshold value.
  • The term "DP pass", as used in FIGS. 5 and 6, means the route or path traced back, along the points where C(i,j) is minimized, from the terminal end of the reference pattern at which the DP value determined by formula (3) takes its minimum value.
  • In Step S7, as shown in FIG. 5, back tracking of the DP pass is performed from the point of the minimum value found in Step S6, the distance over only the word section, indicated by the thick line of the DP pass shown in FIG. 5, is calculated again, and the resulting distance is temporarily stored in a buffer as the distance of the input word.
  • In Step S8B, the word spotting unit 3 checks whether matching of the selected word section with the reference patterns of all registered words is complete. If it is, Step S10 is executed; if not, calculation of the distance between the selected word section and the reference pattern of the next registered word, stored in unit 4, is started by returning to Step S3.
  • The distances between the selected word section of input speech and the reference patterns, obtained by conducting word spotting on the patterns synthesized by the reference pattern synthesizing unit 6, are compared in the word recognition unit 7, and the word associated with the minimum distance is output as the recognized word (Step S10). If D(i) is greater than the threshold value in Step S4 for every word in storage unit 4 matched against the selected word section, the word recognition unit 7 determines that there is no recognized word for that section (Step S9).
  • Because the apparatus checks whether the distance between the reference patterns and the input selected word section falls below the threshold value and executes a recognition operation when it does, recognition can be achieved without selecting a word section beforehand.
  • The silence reference patterns may be added before or after the word reference pattern as necessity requires.
  • In another embodiment, the silence reference pattern to be added is not a prepared reference pattern, but one found by calculation from the silence portion of the input signal occurring before the selected word section. By the use of this embodiment, it is possible to perform word spotting that is unlikely to be influenced by background noise.
  • FIG. 7 schematically shows the speech power of the input speech "roku".
  • P_0 designates a threshold value used to select a word section of the input speech.
  • The apparatus recognizes that a word section starts at a time t when the speech power exceeds the threshold value P_0, and the control unit 10 calculates a silence reference pattern with respect to the silent section of the input speech that occurs before the word section.
  • The silence reference pattern has a duration l. If l is set to approximately twice the duration of a geminated consonant, it is possible to cope with the silent section l_0 before the silent plosive consonant "k" shown in FIG. 7.
  • FIG. 8 is a detailed flow chart of the speech analysis operation in Step S1 of FIG. 3 according to this embodiment. Analyzed parameters are stored in a buffer in the RAM 11 shown in FIG. 2.
  • In Step S11, a speech parameter C_t for one frame of input speech is calculated. A speech power P_t for the frame is then calculated (Step S12) and compared with a preset threshold value P_0 (Step S13). If the speech power P_t is less than the threshold value P_0, the speech analysis unit 1 determines that a silent section of the input speech continues, and an address pointer a for a buffer is incremented by one (Step S14). The calculated speech parameter C_t is then stored at the address designated by the address pointer a in the buffer (Step S15).
  • FIG. 9(1) shows the state of the buffer in Step S12 at the time t.
  • A speech parameter C_t-1 for one frame, calculated at time t-1, is stored at the address designated by the address pointer a.
  • Below it, speech parameters C_t-2, C_t-3, . . ., C_t-l corresponding to times t-2, t-3, . . ., t-l are stored. If it is determined in Step S13 that the speech power P_t is less than the threshold value P_0 at the time t, Steps S14 and S15 are executed and the state of the buffer becomes as shown in FIG. 9(2).
  • The address to which the address pointer a points is advanced by one, and the speech parameter C_t at the time t is stored at the advanced address.
  • Steps S11 to S15 are repeated until the speech power P_t becomes equal to or greater than the threshold value P_0.
  • If the speech power P_t is equal to or greater than the threshold value P_0 in Step S13, the speech analysis unit 1 determines that a word section starts at the point where the power reaches the threshold, and Step S16 is executed.
  • In Step S16, a silence reference pattern is calculated by using the speech parameters stored in the buffer.
  • Since the speech parameters stored in the buffer correspond to a silent section occurring before a word section in the input signal, and therefore include the background noise contained in that signal, precise word spotting can be achieved regardless of the strength and kind of the background noise by using a silence reference pattern calculated from these parameters.
  • The number of speech parameters in the buffer used to calculate the silence reference pattern depends on how the reference pattern synthesizing unit 6 shown in FIG. 1 adds the silence reference pattern to the word reference pattern. The addition is roughly performed by one of the following two methods:
  • a) Silence reference patterns for l frames are stored in the silence reference pattern storing unit 5 and added to the word reference pattern.
  • b) Silence reference patterns for several frames are stored in the silence reference pattern storing unit 5 and repeatedly used so as to be added to the word reference pattern as a silence reference pattern for the l frames.
  • In method a), silence reference patterns for l frames are created by using the l speech parameters from C_t-l+1 to C_t in the buffer.
  • In method b), several typical speech parameters are selected from the buffer so as to create silence reference patterns for several frames.
  • The silence reference patterns thus created are stored in the silence reference pattern storing unit 5, and speech analysis is completed (Step S17).
  • Although FIGS. 9(1) and 9(2) schematically show a buffer of infinite length, if the ring buffer shown in FIG. 10 is used, the length of the buffer need be no more than l+1.
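The ring-buffer arrangement of FIG. 10 and the calculation of Step S16 can be sketched together as follows. This is a minimal illustration only: the class interface, the buffer holding one feature vector per frame, and the use of a mean vector as the silence reference are assumptions not fixed by the specification.

```python
class SilenceBuffer:
    """Ring buffer of length l+1 holding the most recent frame
    parameters observed while the speech power stays below P_0."""

    def __init__(self, l):
        self.data = [None] * (l + 1)
        self.ptr = -1      # address pointer "a"
        self.count = 0     # number of frames stored so far

    def push(self, frame):
        # advance the address pointer by one, wrapping around,
        # then store the newly calculated speech parameter there
        self.ptr = (self.ptr + 1) % len(self.data)
        self.data[self.ptr] = frame
        self.count = min(self.count + 1, len(self.data))

    def latest(self, n):
        # return the n most recently stored frames, oldest first
        n = min(n, self.count)
        out = [self.data[(self.ptr - i) % len(self.data)] for i in range(n)]
        return out[::-1]


def silence_reference(buf, l):
    """Illustrative Step S16: derive a one-frame silence reference
    as the mean of the last l buffered parameter vectors."""
    frames = buf.latest(l)
    dim = len(frames[0])
    return [sum(f[k] for f in frames) / len(frames) for k in range(dim)]
```

Because only the l most recent silent frames are ever read back, the buffer never needs more than l+1 slots, matching the remark about FIG. 10.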

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Transmission Systems Not Characterized By The Medium Used For Transmission (AREA)

Abstract

An apparatus and method for recognizing speech includes a memory for storing data representing a reference pattern composed of the combination of a word reference pattern and a silence pattern, and a calculator for calculating the differences between data representing the reference pattern and data representing input speech. The use of such a silence pattern in the reference pattern permits a word such as "other" to be distinguished from the word "mother".

Description

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a speech recognition method and to a speech recognition apparatus capable of simultaneously performing the detection of a section of input speech containing a word and recognizing a word based on a comparison with a reference pattern of the word.
2. Description of the Related Art
One known speech recognition method is the word spotting method which can simultaneously detect a section of input speech containing a word and recognize a word contained in input speech information.
The word spotting method calculates the difference between a reference pattern of a word and a parameter obtained from input speech to be recognized, while shifting the reference pattern by one frame with respect to a power time series of the input speech parameter. This difference is called the distance between the reference pattern and the section of the input speech containing a word. The word spotting method recognizes that a word corresponding to the reference pattern exists in a portion of the input speech when the distance falls below a predetermined threshold value.
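The frame-by-frame shifting comparison described above can be sketched as follows. This is a minimal illustration, assuming a Euclidean frame distance and a rigid (unwarped) alignment of reference to input; the embodiment described later replaces this with DP matching.

```python
import math

def frame_dist(a, b):
    # Euclidean distance between two feature vectors (one frame each)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def spot_word(speech, reference, threshold):
    """Slide the reference pattern over the input one frame at a time
    and return (start_frame, distance) for every alignment whose mean
    frame distance falls below the threshold."""
    hits = []
    n, m = len(speech), len(reference)
    for start in range(n - m + 1):
        d = sum(frame_dist(speech[start + j], reference[j])
                for j in range(m)) / m
        if d < threshold:
            hits.append((start, d))
    return hits
```

Each alignment whose distance drops below the threshold is taken as evidence that the word corresponding to the reference pattern exists at that position in the input.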
However, such a conventional word spotting method cannot correctly perform recognition in the case in which a certain word includes another word as a part thereof, for example, "roku" (six) includes "ku" (nine) or "sici" (seven) includes "ici" (one).
In one example of "roku" and "ku" shown in FIG. 6, a reference pattern for "ku" is matched with two input sounds: the word "ku" and the syllable "ku" which is a part of the input word "roku", thereby resulting in incorrect detection.
SUMMARY OF THE INVENTION
It is an object of the present invention to overcome the problems of the prior art.
It is another object of the present invention to provide a speech recognition method for sequentially calculating the differences between a reference pattern and input speech while shifting the reference pattern with respect to the input speech, where the reference pattern is a combination of a word reference pattern and a silence reference pattern.
It is still another object of the present invention to provide a speech recognition method which uses, as part of the reference pattern, a silence pattern generated from input speech information.
It is still a further object of the present invention to provide a speech recognition apparatus and method which determines the value of speech parameters of the input speech and the reference pattern when calculating the differences therebetween.
It is a further object of the present invention to provide an apparatus and method which can simultaneously detect a word section of input speech and recognize a word in it, in which the recognition rate is improved and in which word selection is more precisely achieved.
It is another object of the present invention to provide a speech recognition apparatus and method which can distinguish a first word from a second word, where the second word includes the first word therein.
It is still another object of the present invention to provide a speech recognition apparatus and method which always creates the appropriate reference pattern despite changes in the input speech, and which can achieve more precise word spotting.
According to one aspect, the method which achieves these objectives relates to a method to be used as a part of a speech recognition method comprising the steps of storing data representing a reference pattern comprising the combination of a word reference pattern and a silence pattern, and calculating the differences between data representing the reference pattern and data representing input speech. The calculating step calculates the differences between data representing the reference pattern and data representing input speech while shifting the data representing the reference pattern with respect to the data representing the input speech. The storing step can comprise the step of storing a non-word portion of the data representing input speech as the silence pattern. In addition, the method can further comprise the step of sequentially updating the non-word portion of the data representing input speech as the silence pattern. The method further comprises the step of determining the value of speech parameters of data representing input speech and the reference pattern before the calculating step.
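The storing step's combined reference pattern can be illustrated as a simple concatenation. In this sketch the frame-list representation, and the attachment of the same silence pattern both before and after the word, are assumptions for illustration; the embodiment also allows silence on one side only.

```python
def synthesize_reference(silence_pattern, word_pattern):
    """Combine a silence reference pattern with a word reference
    pattern.  Silence frames are attached before and after the word
    so that matching succeeds only where the word is bounded by
    silence in the input."""
    return list(silence_pattern) + list(word_pattern) + list(silence_pattern)
```

A distance calculation against this combined pattern then penalizes alignments where speech (rather than silence) surrounds the candidate word, which is what distinguishes "ku" from the "ku" inside "roku".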
According to another aspect, the present invention which achieves these objectives relates to a subsystem of the speech recognition apparatus comprising input means for inputting speech data, word reference pattern storing means for storing at least one word reference pattern, reference pattern synthesizing means for synthesizing a reference pattern from a silence reference pattern and at least one word reference pattern, and calculating means for calculating the differences between the input speech data and the synthesized reference pattern.
The calculating means calculates the differences between the input speech data and the synthesized reference pattern while shifting the synthesized reference pattern with respect to the input speech data.
According to one embodiment, the system further comprises non-word portion storing means for storing a non-word portion of the input speech representing a portion of the input speech in which a word is absent. In this embodiment, the reference pattern synthesizing means synthesizes the reference pattern by using the non-word portion of the input speech as the silence reference pattern.
The apparatus can further comprise means for sequentially updating the non-word portion of the input speech data stored in the non-word portion storing means.
The calculating means can determine the values of speech parameters of the input speech data and the reference pattern.
Such a speech recognition apparatus and method are particularly advantageous when using the word spotting method for simultaneously detecting a word section and recognizing a word, since a reference pattern composed of a word reference pattern and a silence pattern is used. Moreover, since the reference pattern includes a silence pattern added thereto, the recognition rate for speech independently input by an input unit is improved and word selection is more precisely achieved. Furthermore, the use of such a silence pattern permits the apparatus to distinguish a word such as "ku" from another word including this word, such as "roku", and thus, both word patterns can be correctly recognized. In addition, by using the embodiment in which the silence pattern is obtained from the input speech information, it is always possible to create an appropriate reference pattern despite any changes in the input speech, and it is also possible to achieve more precise word spotting. Further, because word spotting can be carried out using the silence pattern while the word distance is recalculated excluding it, speech information can be precisely recognized without being influenced by the length and characteristics of the silence pattern added to the reference pattern.
These and other objects, advantages and features of the present invention will become more apparent from the following detailed description of the preferred embodiments when taken in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic block diagram of the present invention;
FIG. 2 is a schematic block diagram of a speech recognition apparatus of the present invention;
FIG. 3 is a control flow chart of the present invention;
FIG. 4 is a view explaining detection of the minimum point;
FIG. 5 is a view explaining a distance recalculation operation;
FIG. 6 is a view explaining incorrect recognition in the connected DP;
FIG. 7 is a schematic view explaining detection of a start point of a word section;
FIG. 8 is a control flow chart of speech analysis according to the present invention;
FIGS. 9(1) and 9(2) are explanatory views of buffers for storing speech parameters; and
FIG. 10 is an explanatory view of a ring buffer.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
FIG. 1 is a block diagram showing the construction of an embodiment of the present invention. The speech recognition apparatus of the present invention comprises a speech analysis unit 1 for analyzing input speech, a word section selection unit 2 for selecting a section including a word, a word spotting unit 3 for executing word spotting by using a registered reference pattern, a word reference pattern storing unit 4 for storing a word reference pattern expressed in a parameter time series, a silence reference pattern storing unit 5 for executing matching with a silence, a reference pattern synthesizing unit 6 for synthesizing a word reference pattern stored in the word reference pattern storing unit 4 and a silence reference pattern stored in the silence reference pattern storing unit 5, and a word recognition unit 7 recognizing a word from the input speech.
FIG. 2 is a constructional block diagram of a specific speech recognition apparatus to realize a recognition method of the present invention. Referring to FIG. 2, the speech recognition apparatus comprises an input unit 8 for inputting speech to be recognized, a disk 9, such as a hard disk or a floppy disk, for storing various kinds of data, and a control unit 10 for controlling the speech recognition apparatus including a read-only memory (ROM) for storing a control program shown in FIG. 3. The control unit 10 determines the processes to be performed and performs a control operation according to the control program in the ROM. Reference numerals 11 and 12 denote a random access memory (RAM) for storing various kinds of data from the units shown in FIG. 1, and an output unit composed of, for example, a CRT display or a printer, respectively. The units shown in FIG. 1 each may have a CPU, a RAM and a ROM.
Process operations of the present invention will now be explained with reference to FIG. 3. Speech input from the input unit 8 is analyzed into parameters suitable for speech recognition, such as LPC cepstrums, by the speech analysis unit 1, and power time series of the input speech are simultaneously found (Step S1). The power time series found by the speech analysis unit 1 are monitored by the word section selection unit 2; a point where the power of the speech exceeds a predetermined threshold value is recognized as a portion where a word is likely to exist, and a section long enough to contain a word including the point is selected as a word section from the time series (Step S2). However, the word section selection is not performed strictly. Word spotting is conducted by the word spotting unit 3 on the parameter series found by the word section selection unit 2, using a reference pattern obtained in the reference pattern synthesizing unit 6 by attaching silence patterns from the silence reference pattern storing unit 5 before and after a word reference pattern stored in the word reference pattern storing unit 4. The frame length of the silence reference pattern added to the word reference pattern in the reference pattern synthesizing unit 6 should be sufficiently long to take into account silent sections of speech arising from a geminated consonant or a silent plosive consonant. For example, if a dynamic programming (DP) method with inclination control of 1/2 to 2 is used in matching input speech to stored speech patterns, silence frames numbering more than twice the length of the silent sections produced by a geminated consonant or a silent plosive consonant are added to the word reference pattern. It is thereby possible to prevent incorrect detection of a word even if silent sections are made by a geminated consonant or a silent plosive consonant before and after the silence reference pattern.
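The loose word-section selection of Step S2 can be sketched as follows. The fixed widening margin and the simple per-frame power comparison are illustrative assumptions; the embodiment requires only a section long enough to contain the word, since word spotting later determines the exact endpoints.

```python
def select_word_section(power, p0, margin):
    """Return (start, end) frame indices of a section long enough to
    contain a word: from the first frame whose power reaches the
    threshold p0 to the end of the above-threshold run, widened by
    `margin` frames on each side (clipped to the signal).  Returns
    None when no frame reaches the threshold."""
    for t, p in enumerate(power):
        if p >= p0:
            end = t
            # extend through the run of above-threshold frames
            while end < len(power) and power[end] >= p0:
                end += 1
            return max(0, t - margin), min(len(power), end + margin)
    return None
```

The generous margin is what lets the selection be "not performed strictly": any silence or noise included at the edges is handled by the silence reference pattern during matching.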
The detailed operation of the word spotting unit 3 will be described with reference to the flow chart shown in FIG. 3. First, in Step S3, the word spotting unit 3 performs a distance calculation for each frame of the input speech. When the DP method is used, a DP value D(i), i.e., a distance, is obtained for each frame from the following expressions:
a) Cumulative Distance: ##EQU1## (1)
b) Optimal Path Length: ##EQU2## (2)
c) DP Value:
D(i) = P(i,J)/C(i,J)                                         (3)
i: i-th frame of input parameter
j: j-th frame of reference pattern
d(i,j): distance between input vector of i-th frame and reference pattern of j-th frame
P(i,j): cumulative distance at point (i,j)
C(i,j): optimal path length at point (i,j)
J: length of reference pattern
The DP value D(i) is the distance between each frame of a word section of the input speech and a reference pattern synthesized by the reference pattern synthesizing unit 6.
A brief description will now be given of the spotting operation performed in Step S3, with reference to the formulae (1) to (3).
A DP matching operation is executed between the word section of the input speech and the reference patterns stored in the dictionary, while shifting the reference pattern frame by frame with respect to the input speech. The distance between the i-th frame of the input speech and the j-th frame of the reference pattern is expressed by d(i,j) in formula (1), and the cumulative distance between these frames is represented by P(i,j) in the same formula.
The formula (2) shown above is intended for calculating the optimal path length for each frame of the input speech and the reference pattern.
The DP value D(i) appearing in formula (3) is obtained, where the reference pattern length is represented by J, by dividing P(i,J) by C(i,J) so as to normalize fluctuations caused by differences in path length.
By monitoring the DP value while shifting the reference pattern frame by frame with respect to the input speech, it is possible to detect the moment at which the DP value reaches a minimum. Detection and recognition of the word can be performed simultaneously on the basis of the position at which the minimum DP value is detected and the reference pattern with which that minimum is obtained.
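As an illustration of formulas (1) to (3), the following sketch computes a DP value D(i) for every input frame. The patent's exact recurrences for P(i,j) and C(i,j) (the ##EQU## expressions) are not reproduced on this page, so a plain three-direction DP with a free starting point is assumed here in their place; the distance measure (Euclidean) is likewise an assumption.

```python
import numpy as np

def dp_values(input_feats, ref_feats):
    """Return D(i) = P(i, J) / C(i, J) for each input frame i, where the
    match may start at any input frame (free start) and P/C follow an
    assumed three-direction DP recurrence standing in for the patent's
    formulas (1) and (2)."""
    I, J = len(input_feats), len(ref_feats)
    # d(i, j): distance between input frame i and reference frame j
    d = np.linalg.norm(input_feats[:, None, :] - ref_feats[None, :, :], axis=2)
    P = np.full((I, J), np.inf)          # cumulative distance, formula (1)
    C = np.zeros((I, J), dtype=int)      # optimal path length, formula (2)
    for i in range(I):
        P[i, 0], C[i, 0] = d[i, 0], 1    # matching may begin at any frame i
        for j in range(1, J):
            cands = [(P[i, j - 1], C[i, j - 1])]
            if i > 0:
                cands += [(P[i - 1, j], C[i - 1, j]),
                          (P[i - 1, j - 1], C[i - 1, j - 1])]
            p, c = min(cands, key=lambda pc: pc[0])
            P[i, j], C[i, j] = d[i, j] + p, c + 1
    return P[:, J - 1] / C[:, J - 1]     # DP value D(i), formula (3)
```

Scanning the returned values for their minimum gives both the end position of the word and, across reference patterns, the recognized word.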
FIG. 5 shows, by way of example, the manner in which data is obtained during matching with a certain reference pattern.
In Step S4, the DP value D(i) given by expression (3) is compared with a preset threshold value. If D(i) is less than the threshold value, Step S6 is executed; if D(i) is greater than or equal to the threshold value, Step S5 is executed. In Step S5, the word spotting unit 3 checks whether the calculation in Step S3 has been performed for all frames of the selected section. If the calculation is complete for all frames, Step S8A is executed; otherwise, the process returns to Step S3 to calculate the distances of the remaining frames. In Step S6, as shown in FIG. 4, the word spotting unit 3 determines the point at which the DP value is a minimum within the section in which it is less than the threshold value. The term "DP pass," as used in FIGS. 5 and 6, means a route or path traced back, from the terminal end of the reference pattern at which the DP value determined by formula (3) takes its minimum value, along the points where C(i,j) is minimized. In Step S7, as shown in FIG. 5, back tracking of the DP pass is performed from the point of the minimum value found in Step S6, the distance over only the word section indicated by the thick line of the DP pass in FIG. 5 is calculated again, and the resulting distance is temporarily stored in a buffer as the distance of the input word. In Step S8B, the word spotting unit 3 checks whether matching of the selected word section with the reference patterns of all registered words is complete. If it is, Step S10 is executed; if not, the process returns to Step S3 to calculate the distance between the selected word section and the reference pattern of the next registered word, which is stored in unit 4. After the above word spotting is performed in the word spotting unit 3 shown in FIG. 1, the distances between the selected word section of the input speech and the reference patterns synthesized by the reference pattern synthesizing unit 6 are compared in the word recognition unit 7, and the word associated with the minimum distance is output as the recognized word (Step S10). If D(i) never falls below the threshold value in Step S4 for any word in storing unit 4 matched with the selected word section, the word recognition unit 7 determines that there is no recognized word for the selected word section (Step S9).
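The decision flow of Steps S4 through S10 can be sketched as follows, assuming the per-frame DP values have already been computed for each registered word. The function name and the dictionary layout are hypothetical conveniences, not the patent's data structures.

```python
def recognize(frame_scores, threshold):
    """frame_scores: dict mapping each registered word to its sequence of
    DP values D(i) over the selected word section (one value per frame).
    Returns (word, frame_position) for the word whose minimum D(i) is
    smallest and below the threshold, or None if no word qualifies."""
    best_word, best_pos, best_val = None, None, threshold
    for word, scores in frame_scores.items():        # Step S8B: all words
        for i, v in enumerate(scores):               # Steps S3-S5: all frames
            if v < best_val:                         # Steps S4/S6: below
                best_word, best_pos, best_val = word, i, v   # threshold, track minimum
    # Step S10 if any word fell below the threshold, Step S9 otherwise
    return (best_word, best_pos) if best_word is not None else None
```

Because the running minimum starts at the threshold itself, a word is returned only when some D(i) actually falls below the threshold, mirroring the rejection path of Step S9.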
In this embodiment, a word section is roughly selected based on the power time series of the input speech, and word spotting is then conducted on the selected word section. Alternatively, the distance may be calculated while continuously shifting the reference pattern along the input pattern, with the apparatus checking whether the distance between the reference pattern and the input falls below the threshold value and executing a recognition operation whenever it does; in that case, recognition can be achieved without selecting a word section beforehand.
Although word recognition is carried out in this embodiment by adding silence reference patterns both before and after the word reference pattern, a silence reference pattern may instead be added only before or only after the word reference pattern, as required.
Another embodiment of the present invention will now be described.
In this embodiment, a silence reference pattern to be added is not a prepared reference pattern, but a reference pattern found by calculation based on each silence portion in an input signal occurring before a selected word section. It is possible, by the use of this embodiment, to perform word spotting which is unlikely to be influenced by background noise.
It is a well-known fact that the strength and kind of the background noise have a great influence on the speech recognition rate. This also applies to the recognition of a word section. Therefore, if word spotting is carried out by using a silence reference pattern calculated from silence information in a quiet signal prepared in advance, the word spotting may not be properly performed on input speech containing much background noise. Accordingly, proper word spotting is achieved by using the silent section that occurs in the input speech signal before the selected word section.
FIG. 7 schematically shows the speech power of the input speech "roku". P0 designates a threshold value used to select a word section of the input speech. Referring to FIG. 7, the apparatus recognizes that a word section starts at the time t when the speech power exceeds the threshold value P0, and the control unit 10 calculates a silence reference pattern from the silent section of the input speech that occurs before the word section. The silence reference pattern has a duration l. If l is set to approximately twice the duration of a geminated consonant, the silent section l0 before the silent plosive consonant "k" shown in FIG. 7 can also be handled.
FIG. 8 is a detailed flow chart of the speech analysis operation in Step S1 of FIG. 3 according to this embodiment. Analyzed parameters are stored in a buffer in the RAM 11 shown in FIG. 2.
With reference to FIG. 8, the speech analysis will now be described in detail.
First, a speech parameter Ct for one frame of the input speech is calculated in Step S11. Then, a speech power Pt for the frame is calculated (Step S12) and compared with a preset threshold value P0 (Step S13). If the speech power Pt is less than the threshold value P0, the speech analysis unit 1 determines that a silent section of the input speech is continuing, and an address pointer a for a buffer is incremented by one (Step S14). The calculated speech parameter Ct is then stored at the address designated by the address pointer a in the buffer (Step S15). FIG. 9(1) shows the state of the buffer in Step S12 at the time t. A speech parameter Ct-1 calculated at time t-1 is stored at the address designated by the address pointer a. Before the speech parameter Ct-1, the speech parameters Ct-2, Ct-3, . . . , Ct-l corresponding to times t-2, t-3, . . . , t-l are stored. If it is determined in Step S13 that the speech power Pt at time t is less than the threshold value P0, Steps S14 and S15 are executed and the state of the buffer becomes as shown in FIG. 9(2): the address to which the address pointer a points is advanced by one, and the speech parameter Ct at time t is stored at the advanced address. Steps S11 to S15 are repeated until the speech power Pt equals or exceeds the threshold value P0.
If the speech power Pt is equal to or greater than the threshold value P0 in Step S13, the speech analysis unit 1 determines that a word section starts at the point where the speech power Pt first equals or exceeds the threshold value P0, and Step S16 is executed.
In Step S16, a silence reference pattern is calculated by using the speech parameters stored in the buffer. Because these parameters correspond to the silent section occurring before the word section and therefore also contain the background noise present in the input signal, a silence reference pattern calculated from them allows precise word spotting to be achieved regardless of the strength and kind of the background noise.
The number of speech parameters in the buffer used to calculate the silence reference pattern depends on how the reference pattern synthesizing unit 6 shown in FIG. 1 adds the silence reference pattern to the word reference pattern. The addition is roughly performed by the following two methods.
a) Silence reference patterns for l frames are stored in the silence reference pattern storing unit 5, and added to the word reference pattern.
b) Silence reference patterns for several frames are stored in the silence reference pattern storing unit 5, and repeatedly used so as to be added to the word reference pattern as a silence reference pattern for the l frames.
In method a), silence reference patterns for l frames are created by using the l speech parameters from Ct-l+1 to Ct in the buffer. In method b), several typical speech parameters are selected from the buffer so as to create silence reference patterns for several frames.
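Methods a) and b) can be sketched as follows. The patent does not specify how the "typical" parameters of method b) are chosen, so uniform subsampling is assumed here; the function name and array layout are likewise illustrative.

```python
import numpy as np

def silence_reference(buffer_frames, l, method="a", k=3):
    """Create silence reference patterns for l frames from the buffered
    parameters of the silent section preceding the word (Step S16).

    method "a": use all l buffered frames directly.
    method "b": pick k representative frames and repeat them until l
                frames are covered (uniform subsampling is an assumption).
    """
    buf = np.asarray(buffer_frames)[-l:]   # last l frames before the word
    if method == "a":
        return buf
    idx = np.linspace(0, len(buf) - 1, k).astype(int)
    reps = buf[idx]                        # k representative frames
    tiles = -(-l // k)                     # ceil(l / k)
    return np.tile(reps, (tiles, 1))[:l]   # repeated out to l frames
```

Either way the result is an l-frame silence reference pattern, which the reference pattern synthesizing unit 6 then attaches to the word reference pattern.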
The thus created silence reference patterns are stored in the silence reference pattern storing unit 5, and speech analysis is completed (Step S17).
Although FIGS. 9(1) and 9(2) schematically show the buffer as if it had infinite length, if a ring buffer as shown in FIG. 10 is used, the buffer length need be no more than l+1.
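A ring buffer of length l+1 in the spirit of FIG. 10 might look like the following sketch; the class and method names are illustrative, not taken from the patent.

```python
class SilenceRingBuffer:
    """Keeps only the most recent l+1 parameter frames, overwriting the
    oldest entries as the address pointer wraps around."""

    def __init__(self, l):
        self.size = l + 1
        self.data = [None] * self.size
        self.a = 0            # address pointer, advanced modulo the size
        self.count = 0        # number of valid frames stored so far

    def push(self, frame):
        # Steps S14-S15: advance the pointer, then store the parameter
        self.a = (self.a + 1) % self.size
        self.data[self.a] = frame
        self.count = min(self.count + 1, self.size)

    def recent(self, n):
        """Return the last n stored frames, oldest first."""
        n = min(n, self.count)
        return [self.data[(self.a - n + 1 + i) % self.size]
                for i in range(n)]
```

When the speech power crosses the threshold, `recent(l)` would supply the frames needed to compute the silence reference pattern.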
The individual components represented by the blocks shown in FIG. 2 are well known in the voice recognition art, and their specific construction and operation are not critical to the invention or the best mode for carrying out the invention. Moreover, the steps discussed in the specification and shown in FIGS. 3 and 8 can easily be programmed into well-known central processing units by persons of ordinary skill in the art; since such programming per se is not part of the invention, no further description thereof is deemed necessary.

Claims (20)

What is claimed is:
1. A method of detecting words in input speech data, comprising the steps of:
storing data representing a reference pattern which comprises a combination of (i) a word reference pattern and (ii) silence reference patterns which are each sufficiently long to take into account silent sections of speech arising from a geminated consonant or a silent plosive consonant;
calculating distances between data representing the reference pattern and data representing input speech; and
detecting words in the data representing input speech based on the calculated distances between the data representing the reference pattern and the data representing input speech.
2. The method as recited in claim 1, wherein said calculating step calculates distances between data representing the reference pattern and data representing input speech while shifting the data representing the reference pattern with respect to the data representing the input speech.
3. The method as recited in claim 2, wherein said storing step comprises the step of storing a non-word portion of the data representing input speech as the silence reference pattern.
4. The method as recited in claim 3, further comprising the step of sequentially updating the non-word portion of the data representing input speech as the silence reference pattern.
5. The method as recited in claim 2, further comprising the step of determining the value of speech parameters of data representing input speech and the reference pattern before said calculating step.
6. An apparatus for detecting words in input speech data, comprising:
input means for inputting speech data;
word reference pattern storing means for storing word reference pattern data;
reference pattern synthesizing means for synthesizing reference pattern data from the word reference pattern data and silence reference pattern data which is sufficiently long to take into account silent sections of speech arising from a geminated consonant or a silent plosive consonant;
calculating means for calculating distances between the input speech data and the synthesized reference pattern data; and
detecting means for detecting words in the input speech data based on the calculated distances between the synthesized reference pattern data and the input speech data.
7. The apparatus as recited in claim 6, wherein said calculating means calculates the distances between the input speech data and the synthesized reference pattern data while shifting the synthesized reference pattern data with respect to the input speech data.
8. The apparatus as recited in claim 7, further comprising a non-word portion storing means for storing a non-word portion of the input speech data representing a portion of the input speech in which a word is absent, wherein said reference pattern synthesizing means synthesizes the reference pattern data by using the non-word portion of the input speech data as the silence reference pattern data.
9. The apparatus as recited in claim 8, further comprising means for sequentially updating the non-word portion of the input speech data stored in said non-word portion storing means.
10. The apparatus as recited in claim 6, wherein said calculating means determines values of speech parameters of the input speech data and the synthesized reference pattern data.
11. The apparatus as recited in claim 6, wherein said word reference pattern storing means is a read only memory.
12. The apparatus as recited in claim 6, wherein said word reference pattern storing means is a random access memory.
13. A method of detecting words in input speech data, comprising the steps of:
inputting speech data;
synthesizing reference pattern data from stored word reference pattern data and stored silence reference pattern data which is at least twice the size of silence reference pattern data representing a geminated consonant;
shifting the reference pattern data frame by frame with respect to the input speech data;
calculating distances between the input speech data and the reference pattern data for each frame of data; and
detecting words in the input speech data based on the calculated distances between the input speech data and the reference pattern data.
14. A method according to claim 13, wherein the silence reference pattern data is determined by calculations performed on selected portions of the input speech data.
15. An apparatus for detecting words in input speech data, comprising:
input means for inputting speech data;
word reference storing means for storing word reference pattern data;
silence reference storing means for storing silence reference pattern data which is at least twice the size of silence reference pattern data representing a geminated consonant;
synthesizing means for synthesizing reference pattern data from the word reference pattern data and the silence reference pattern data;
shifting means for shifting the reference pattern data frame by frame with respect to the input speech data;
calculating means for calculating distances between the input speech data and the reference pattern data for each frame of data; and
detecting means for detecting words in the input speech data based on the calculated distances between the input speech data and the reference pattern data.
16. An apparatus according to claim 15, further comprising silence reference pattern calculating means for performing calculations on selected portions of the input speech data to determine the silence reference pattern data.
17. The apparatus as recited in claim 15, wherein said word reference storing means is a read only memory.
18. The apparatus as recited in claim 15, wherein said word reference storing means is a random access memory.
19. The apparatus as recited in claim 15, wherein said silence reference storing means is a read only memory.
20. The apparatus as recited in claim 15, wherein said silence reference storing means is a random access memory.
US07/895,813 1991-06-11 1992-06-09 Method and apparatus for detecting words in input speech data Expired - Lifetime US5369728A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP3139091A JPH04362698A (en) 1991-06-11 1991-06-11 Method and device for voice recognition
JP3-139091 1991-06-11

Publications (1)

Publication Number Publication Date
US5369728A true US5369728A (en) 1994-11-29

Family

ID=15237282

Family Applications (1)

Application Number Title Priority Date Filing Date
US07/895,813 Expired - Lifetime US5369728A (en) 1991-06-11 1992-06-09 Method and apparatus for detecting words in input speech data

Country Status (2)

Country Link
US (1) US5369728A (en)
JP (1) JPH04362698A (en)


Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4481593A (en) * 1981-10-05 1984-11-06 Exxon Corporation Continuous speech recognition
US4489435A (en) * 1981-10-05 1984-12-18 Exxon Corporation Method and apparatus for continuous word string recognition
US4596032A (en) * 1981-12-14 1986-06-17 Canon Kabushiki Kaisha Electronic equipment with time-based correction means that maintains the frequency of the corrected signal substantially unchanged
US4627091A (en) * 1983-04-01 1986-12-02 Rca Corporation Low-energy-content voice detection apparatus
US4712243A (en) * 1983-05-09 1987-12-08 Casio Computer Co., Ltd. Speech recognition apparatus
US4718095A (en) * 1982-11-26 1988-01-05 Hitachi, Ltd. Speech recognition method
US4736429A (en) * 1983-06-07 1988-04-05 Matsushita Electric Industrial Co., Ltd. Apparatus for speech recognition
US4783807A (en) * 1984-08-27 1988-11-08 John Marley System and method for sound recognition with feature selection synchronized to voice pitch
US4802226A (en) * 1982-09-06 1989-01-31 Nec Corporation Pattern matching apparatus
US4811399A (en) * 1984-12-31 1989-03-07 Itt Defense Communications, A Division Of Itt Corporation Apparatus and method for automatic speech recognition
US4817159A (en) * 1983-06-02 1989-03-28 Matsushita Electric Industrial Co., Ltd. Method and apparatus for speech recognition
US4821325A (en) * 1984-11-08 1989-04-11 American Telephone And Telegraph Company, At&T Bell Laboratories Endpoint detector
US4856067A (en) * 1986-08-21 1989-08-08 Oki Electric Industry Co., Ltd. Speech recognition system wherein the consonantal characteristics of input utterances are extracted


Non-Patent Citations (12)

* Cited by examiner, † Cited by third party
Title
"A Connected Spoken Recognition Method by O(n) Dynamic Programming", ICASSP '83, vol. 1, I Nakagawa (Apr. 1983), pp. 296-299.
"Automatic Organization of Word Spotting Reference Patterns", Review of the Electrical Communications Laboratories, vol. 35, No. 6, T. Kawabata et al. (Nov. 1987), pp. 681-686.
"Consonant Recognition Methods For Unspecified Speakers Using BPF Powers and Time Sequence of LPC Cepstrum Coefficients", Systems and Computers in Japan, vol. 18, No. 6, K. Niyada et al. (Jun. 1987), pp. 47-59.
"Detection of Segment Type Features for Continuous Speech Recognition", The Acoustical Society of Japan, Transaction No. S 585-53, T. Kosaka et al. (Dec. 19, 1985) pp. 405-412.
"Dynamic Time Warping and Vector Quantization in Isolated and Connected Word Recognition", European Conference on Speech Technology, vol. 2, A. Boyer et al. (Sep. 1987), pp. 436-439.
"Isolated Words Recognition Using DP Matching and Mahalanobis Distance", Journal of Electro-Communication, vol. J66-A, No. 1, T. Takara et al. (Jan. 1983), pp. 64-70.

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5678177A (en) * 1992-07-14 1997-10-14 2777321 Canada Ltd. RF repeaters for time division duplex cordless telephone system
US5553192A (en) * 1992-10-12 1996-09-03 Nec Corporation Apparatus for noise removal during the silence periods in the discontinuous transmission of speech signals to a mobile unit
US5465317A (en) * 1993-05-18 1995-11-07 International Business Machines Corporation Speech recognition system with improved rejection of words and sounds not in the system vocabulary
US5692097A (en) * 1993-11-25 1997-11-25 Matsushita Electric Industrial Co., Ltd. Voice recognition method for recognizing a word in speech
US5764852A (en) * 1994-08-16 1998-06-09 International Business Machines Corporation Method and apparatus for speech recognition for distinguishing non-speech audio input events from speech audio input events
DE19508711A1 (en) * 1995-03-10 1996-09-12 Siemens Ag Method for recognizing a signal pause between two patterns which are present in a time-variant measurement signal
US5970452A (en) * 1995-03-10 1999-10-19 Siemens Aktiengesellschaft Method for detecting a signal pause between two patterns which are present on a time-variant measurement signal using hidden Markov models
US5684925A (en) * 1995-09-08 1997-11-04 Matsushita Electric Industrial Co., Ltd. Speech representation by feature-based word prototypes comprising phoneme targets having reliable high similarity
US5822728A (en) * 1995-09-08 1998-10-13 Matsushita Electric Industrial Co., Ltd. Multistage word recognizer based on reliably detected phoneme similarity regions
US5825977A (en) * 1995-09-08 1998-10-20 Morin; Philippe R. Word hypothesizer based on reliably detected phoneme similarity regions
US5924067A (en) * 1996-03-25 1999-07-13 Canon Kabushiki Kaisha Speech recognition method and apparatus, a computer-readable storage medium, and a computer- readable program for obtaining the mean of the time of speech and non-speech portions of input speech in the cepstrum dimension
US5909665A (en) * 1996-05-30 1999-06-01 Nec Corporation Speech recognition system
US6108628A (en) * 1996-09-20 2000-08-22 Canon Kabushiki Kaisha Speech recognition method and apparatus using coarse and fine output probabilities utilizing an unspecified speaker model
US6266636B1 (en) 1997-03-13 2001-07-24 Canon Kabushiki Kaisha Single distribution and mixed distribution model conversion in speech recognition method, apparatus, and computer readable medium
US6236962B1 (en) 1997-03-13 2001-05-22 Canon Kabushiki Kaisha Speech processing apparatus and method and computer readable medium encoded with a program for recognizing input speech by performing searches based on a normalized current feature parameter
US6393396B1 (en) 1998-07-29 2002-05-21 Canon Kabushiki Kaisha Method and apparatus for distinguishing speech from noise
US7058580B2 (en) * 2000-05-24 2006-06-06 Canon Kabushiki Kaisha Client-server speech processing system, apparatus, method, and storage medium
US6813606B2 (en) 2000-05-24 2004-11-02 Canon Kabushiki Kaisha Client-server speech processing system, apparatus, method, and storage medium
US20050043946A1 (en) * 2000-05-24 2005-02-24 Canon Kabushiki Kaisha Client-server speech processing system, apparatus, method, and storage medium
US20030097264A1 (en) * 2000-10-11 2003-05-22 Canon Kabushiki Kaisha Information processing apparatus and method, a computer readable medium storing a control program for making a computer implemented information process, and a control program for selecting a specific grammar corresponding to an active input field or for controlling selection of a grammar or comprising a code of a selection step of selecting a specific grammar
US6587820B2 (en) 2000-10-11 2003-07-01 Canon Kabushiki Kaisha Information processing apparatus and method, a computer readable medium storing a control program for making a computer implemented information process, and a control program for selecting a specific grammar corresponding to an active input field or for controlling selection of a grammar or comprising a code of a selection step of selecting a specific grammar
US7024361B2 (en) 2000-10-11 2006-04-04 Canon Kabushiki Kaisha Information processing apparatus and method, a computer readable medium storing a control program for making a computer implemented information process, and a control program for selecting a specific grammar corresponding to an active input field or for controlling selection of a grammar or comprising a code of a selection step of selecting a specific grammar
US20020128826A1 (en) * 2001-03-08 2002-09-12 Tetsuo Kosaka Speech recognition system and method, and information processing apparatus and method used in that system
US20050086057A1 (en) * 2001-11-22 2005-04-21 Tetsuo Kosaka Speech recognition apparatus and its method and program
GB2482444A (en) * 2007-03-30 2012-02-01 Wolfson Microelectronics Plc Silence pattern detector for digital audio bit streams
GB2482444B (en) * 2007-03-30 2012-08-01 Wolfson Microelectronics Plc Pattern detection circuitry
US8331581B2 (en) 2007-03-30 2012-12-11 Wolfson Microelectronics Plc Pattern detection circuitry
US9646610B2 (en) 2012-10-30 2017-05-09 Motorola Solutions, Inc. Method and apparatus for activating a particular wireless communication device to accept speech and/or voice commands using identification data consisting of speech, voice, image recognition
US9144028B2 (en) 2012-12-31 2015-09-22 Motorola Solutions, Inc. Method and apparatus for uplink power control in a wireless communication system

Also Published As

Publication number Publication date
JPH04362698A (en) 1992-12-15

Similar Documents

Publication Publication Date Title
US5369728A (en) Method and apparatus for detecting words in input speech data
US6185530B1 (en) Apparatus and methods for identifying potential acoustic confusibility among words in a speech recognition system
US5899971A (en) Computer unit for speech recognition and method for computer-supported imaging of a digitalized voice signal onto phonemes
EP0706171A1 (en) Speech recognition method and apparatus
US5621849A (en) Voice recognizing method and apparatus
EP0282272B1 (en) Voice recognition system
JPH0416800B2 (en)
EP0903730B1 (en) Search and rescoring method for a speech recognition system
US20070203700A1 (en) Speech Recognition Apparatus And Speech Recognition Method
US4868879A (en) Apparatus and method for recognizing speech
US6195638B1 (en) Pattern recognition system
US4872201A (en) Pattern matching apparatus employing compensation for pattern deformation
EP0987681B1 (en) Speech recognition method and apparatus
JP2003345388A (en) Method, device, and program for voice recognition
EP1369847B1 (en) Speech recognition method and system
JPH09114482A (en) Speaker adaptation method for voice recognition
JP3009962B2 (en) Voice recognition device
JP3090204B2 (en) Speech model learning device and speech recognition device
JP2574242B2 (en) Voice input device
JPH0247757B2 (en)
JPH10143190A (en) Speech recognition device
JPH0627992A (en) Speech recognizing device
JPH0552516B2 (en)
JPH05197397A (en) Speech recognizing method and its device
WO1992006469A1 (en) Boundary relaxation for speech pattern recognition

Legal Events

Date Code Title Description
AS Assignment

Owner name: CANON KABUSHIKI KAISHA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST.;ASSIGNORS:KOSAKA, TETSUO;SAKURAI, ATSUSHI;TAMURA, JUNICHI;AND OTHERS;REEL/FRAME:006260/0721

Effective date: 19920825

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12