WO2013125203A1 - Speech recognition device, speech recognition method, and computer program - Google Patents
Speech recognition device, speech recognition method, and computer program
- Publication number
- WO2013125203A1 (PCT/JP2013/000876; JP2013000876W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- hypothesis
- recognition result
- score
- result candidate
- speech
- Prior art date
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/08—Speech classification or search
Definitions
- the present invention relates to a speech recognition device, a speech recognition method, and a computer program, and more particularly to a technique for searching for a speech recognition result candidate.
- Patent Document 1 discloses a word search device that outputs an N-best recognition result as a speech recognition candidate after utterance is completed.
- Patent Document 2 discloses a speech recognition system that can sequentially output a plurality of speech recognition result candidates during utterance.
- however, since the technique described in Patent Document 1 outputs speech recognition result candidates only after the user finishes speaking, the user must wait before the speech recognition result can be confirmed.
- since the technique described in Patent Document 2 sequentially outputs a plurality of recognition result candidates, many candidates may be output, which requires the user to spend effort selecting among them.
- an object of the present invention is to provide a speech recognition apparatus that can determine an appropriate number of recognition result candidates during speech.
- the speech recognition apparatus includes a hypothesis search unit that calculates scores for recognition result candidate hypotheses based on speech feature values input frame by frame in time order and searches for hypotheses by referring to the calculated scores, and a recognition result candidate determination unit that, from a plurality of hypotheses searched at a certain time by the hypothesis search unit, determines the higher-scoring hypotheses as the recognition result candidates based on the distribution of scores accumulated up to that time.
- the speech recognition method calculates scores for recognition result candidate hypotheses based on speech feature values input frame by frame in time order, searches for hypotheses by referring to the calculated scores, and, from a plurality of hypotheses searched at a certain time, determines the higher-scoring hypotheses as the recognition result candidates based on the distribution of scores accumulated up to that time.
- the computer program causes a computer included in a speech recognition apparatus to execute a hypothesis search step of calculating scores for recognition result candidate hypotheses based on speech feature values input frame by frame in time order and searching for hypotheses by referring to the calculated scores, and a recognition result candidate determination step of determining, from a plurality of hypotheses searched at a certain time, the higher-scoring hypotheses as the recognition result candidates based on the distribution of scores accumulated up to that time.
- an appropriate number of recognition result candidates can be determined during utterance.
- each embodiment of the present invention described below handles Japanese expressions as an example. For the convenience of examination in other languages after entry into the national phase, each embodiment therefore uses the notation "phoneme-based Japanese katakana (romaji transliteration: gloss in another language)".
- FIG. 1 is a block diagram showing a hardware configuration example of an information processing apparatus (computer) capable of realizing the speech recognition apparatus 1 according to the first embodiment of the present invention.
- the speech recognition apparatus 1 includes a CPU 10, a memory 12, an HDD (hard disk drive) 14, a communication IF (interface) 16 that performs data communication via a communication network (not shown), a display device 18 such as a display, a voice input device 20 such as a microphone that inputs voice and outputs a voice signal, and an input device 22 including a keyboard and a pointing device such as a mouse. These components are connected to one another through a bus 24.
- the input device 22 includes a reader / writer that can read information stored in a computer-readable storage medium such as a CD (compact disk).
- the speech recognition apparatus 1 is realized by the CPU 10 executing a program (computer program) stored in the memory 12 or the HDD 14.
- the hardware configuration example of the speech recognition apparatus 1 shown in FIG. 1 can be applied to the embodiments described later.
- FIG. 2 is a block diagram showing a configuration example of the speech recognition apparatus 1 according to the first embodiment of the present invention.
- the speech recognition apparatus 1 includes a speech input unit 100, a feature amount extraction unit 102, a hypothesis search unit 104, a recognition result candidate determination unit 106, and a result output unit 108.
- the configuration of the speech recognition apparatus 1 is realized by the CPU 10 (FIG. 1) reading a program stored in the memory 12 or the HDD 14 into the memory 12 and executing the program on the CPU 10.
- the configuration of the speech recognition device 1 may be realized by the CPU 10 (FIG. 1) executing a program acquired from the outside by the input device 22 such as the communication IF 16 or a reader / writer. Note that all or some of the functions of the voice recognition device 1 may be realized by hardware provided in the voice recognition device 1.
- the speech input unit 100 inputs a user's utterance from the speech input device 20 (FIG. 1), and outputs the input speech to the feature amount extraction unit 102 as a speech signal.
- when the voice input unit 100 detects the start of speech, it starts voice input.
- the voice input unit 100 may detect the voice start point based on an instruction from the outside, for example, when a “start” button provided in the input device 22 is pressed.
- the voice input unit 100 may feed back the start of voice input to the user by a volume indicator of the display device 18 or a display on the display.
- the voice input unit 100 may start recording together with the start of voice input. Furthermore, the voice input unit 100 may input recorded voice data.
- the feature quantity extraction unit 102 converts the voice signal output from the voice input unit 100 into voice feature quantities such as MFCC (Mel Frequency Cepstral Coefficients) and power in units of fixed sections (frames), and outputs them to the hypothesis search unit 104.
- the hypothesis search unit 104 calculates scores for recognition result candidate hypotheses based on the speech feature values input frame by frame in time order, and searches for hypotheses by referring to the calculated scores. Specifically, the hypothesis search unit 104 accepts the speech feature values output from the feature amount extraction unit 102 frame by frame in time order, searches for hypotheses according to the score of each recognition result candidate hypothesis calculated from the speech feature values, and outputs the searched hypotheses and the calculated scores to the recognition result candidate determination unit 106.
- for example, the hypothesis search unit 104 calculates an acoustic score for each hypothesis: with t denoting the current frame, the acoustic score is obtained by adding the acoustic-model likelihood of the feature value in frame t to the acoustic score accumulated up to frame (t-1) (the accumulated acoustic score).
- the hypothesis search unit 104 may use the sum of the acoustic score and a language score as the score.
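- as a non-authoritative illustration of the per-frame score accumulation described above, the following Python sketch updates a hypothesis's accumulated acoustic score with frame t and optionally adds a language score; the function and parameter names and the placeholder likelihood function are assumptions of the sketch, not anything defined in the patent.

```python
# Minimal sketch of the per-frame score update: the acoustic score of a
# hypothesis at frame t is the score accumulated up to frame t-1 plus the
# acoustic-model log likelihood of the feature vector of frame t; the total
# score may optionally add a language (look-ahead) score.

def update_scores(accumulated_acoustic_score, frame_feature,
                  acoustic_log_likelihood, lm_lookahead_score=0.0):
    """Return (new accumulated acoustic score, total score) for one hypothesis."""
    acoustic = accumulated_acoustic_score + acoustic_log_likelihood(frame_feature)
    total = acoustic + lm_lookahead_score
    return acoustic, total

# Toy usage with a dummy likelihood function (an assumption for the demo).
dummy_likelihood = lambda feature: -0.5 * sum(x * x for x in feature)
print(update_scores(-42.0, [0.1, 0.2, 0.3], dummy_likelihood, lm_lookahead_score=-2.1))
```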
- in this embodiment, the hypothesis search unit 104 uses a language model look-ahead score, which is the best language score among the words reachable from the hypothesis, as the language score. For example, when the user has uttered "onse" (oNse), the hypothesis search unit 104 is searching hypotheses at the /e/ stage of /oNse/. In this case, the hypothesis search unit 104 takes as the language model look-ahead score the highest language score among words such as "onsei (oNsei: speech)", "onseininshiki (oNseiniNsiki: speech recognition)", "onsetsu (oNsetsu: syllable)", and "onsen (oNseN: hot spring)".
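- the following Python sketch illustrates one way the language model look-ahead score described above could be computed: take the best language score among all vocabulary words whose phoneme string starts with the current prefix. The toy vocabulary and score values are assumptions made for the example.

```python
# Look-ahead: best language score over all words reachable from the prefix.

def lm_lookahead_score(prefix, vocabulary_scores):
    """Return the best language score of any word starting with `prefix`,
    or None if no word in the vocabulary is reachable from the prefix."""
    reachable = [score for word, score in vocabulary_scores.items()
                 if word.startswith(prefix)]
    return max(reachable) if reachable else None

vocabulary_scores = {          # toy log-probabilities (assumed values)
    "oNsei": -2.1,             # onsei (speech)
    "oNseiniNsiki": -2.8,      # onseininshiki (speech recognition)
    "oNsetsu": -4.0,           # onsetsu (syllable)
    "oNseN": -3.2,             # onsen (hot spring)
}

print(lm_lookahead_score("oNse", vocabulary_scores))   # -> -2.1 (best reachable word)
```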
- the recognition result candidate determination unit 106 determines the higher-scoring hypotheses as recognition result candidates from the plurality of hypotheses searched at a certain time by the hypothesis search unit 104, based on the distribution of the accumulated scores. Specifically, the recognition result candidate determination unit 106 compares the accumulated scores of the hypotheses searched at a certain time by the hypothesis search unit 104; when the difference between scores exceeds a predetermined threshold (first threshold), it determines the higher-scoring hypotheses as recognition result candidates in descending order of score and outputs them to the result output unit 108. For example, when the difference between the scores of two hypotheses exceeds the first threshold, the recognition result candidate determination unit 106 determines the hypothesis with the higher score as a recognition result candidate.
- for example, the first threshold is a value obtained by multiplying the sum of the scores of the plurality of hypotheses searched by the hypothesis search unit 104 by a predetermined ratio.
- because this bounds the maximum number of candidates, the number of candidates does not grow too large.
- specifically, if the first threshold is 10% of the total score of the plurality of hypotheses searched at a certain time by the hypothesis search unit 104, at most 10 hypotheses can have a score equal to or greater than the threshold, so the maximum number of recognition result candidates is 10. If the first threshold is 20% of the total score, at most 5 hypotheses can have a score equal to or greater than the threshold, so the maximum number of recognition result candidates is 5.
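- a minimal Python sketch of the first-threshold decision follows, assuming non-negative accumulated scores as in the percentage example above; the function name, the default ratio, and the toy scores are assumptions, not values taken from the patent.

```python
# First-threshold decision: the threshold is a fixed ratio of the total score,
# and the candidates are the hypotheses above the first score gap that
# exceeds the threshold.

def decide_candidates(hypothesis_scores, ratio=0.1):
    """Return the recognition result candidates, or None if no decision yet."""
    threshold = ratio * sum(hypothesis_scores.values())
    ranked = sorted(hypothesis_scores.items(), key=lambda kv: kv[1], reverse=True)
    for i in range(len(ranked) - 1):
        if ranked[i][1] - ranked[i + 1][1] > threshold:
            return [hyp for hyp, _ in ranked[:i + 1]]
    return None   # no sufficiently large gap yet; keep processing frames

# With ratio=0.1 at most 10 hypotheses can each hold 10% or more of the total
# score, which is why the number of candidates stays bounded.
print(decide_candidates({"oNsei": 60.0, "oNseN": 25.0, "oNsetsu": 15.0}))
# -> ['oNsei']   (gap 35.0 exceeds the threshold 10.0)
```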
- the result output unit 108 outputs the recognition result candidate determined by the recognition result candidate determination unit 106.
- for example, the result output unit 108 displays the recognition result candidates on the display device 18.
- FIG. 3 is a flowchart showing the operation of the speech recognition apparatus 1 according to the first embodiment.
- in step 100 (S100), when the voice input unit 100 detects the start of speech, it starts voice input.
- in step 102 (S102), the feature amount extraction unit 102 converts the speech signal input by the speech input unit 100 into speech feature amounts such as MFCC and power in units of frames.
- in step 104 (S104), the hypothesis search unit 104 receives the speech feature amounts converted by the feature amount extraction unit 102 frame by frame in time order, calculates the score of each hypothesis for each frame, and searches for hypotheses.
- in step 106 (S106) and step 108 (S108), the recognition result candidate determination unit 106 determines the higher-scoring hypotheses among the plurality of hypotheses searched at a certain time by the hypothesis search unit 104 as recognition result candidates according to the score distribution. Specifically, in step 106 (S106), the recognition result candidate determination unit 106 compares the scores of the hypotheses searched by the hypothesis search unit 104; if the difference between two hypotheses exceeds the first threshold, the flow advances to step 108 (S108), and otherwise it returns to S102. For example, the recognition result candidate determination unit 106 arranges the hypotheses in descending order of score and proceeds to S108 when the difference between the score of a hypothesis and the score of the next-ranked hypothesis exceeds the first threshold.
- in step 108 (S108), the recognition result candidate determination unit 106 determines the higher-scoring hypotheses as recognition result candidates. Specifically, it determines as recognition result candidates the hypotheses down to the one whose score differs from that of the next-ranked hypothesis by more than the first threshold.
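- the sketch below ties the flowchart steps together as a per-frame loop that stops as soon as candidates can be decided mid-utterance; extract_features, score_hypotheses, and decide_candidates are placeholders standing in for the feature extraction, hypothesis search, and threshold decision described above, not APIs defined in the patent.

```python
# High-level sketch of the FIG. 3 flow (S100-S108): process frames in time
# order and return as soon as the candidate decision succeeds.

def recognize_streaming(frames, extract_features, score_hypotheses, decide_candidates):
    scores = {}                                        # hypothesis -> accumulated score
    for frame in frames:
        features = extract_features(frame)             # S102: frame-wise feature extraction
        scores = score_hypotheses(scores, features)    # S104: update hypothesis scores
        candidates = decide_candidates(scores)         # S106: compare scores
        if candidates is not None:
            return candidates                          # S108: decided during the utterance
    return decide_candidates(scores)                   # fall back to the end of input
```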
- FIG. 4 is a sequence diagram showing the operation of the speech recognition apparatus 1 according to the first embodiment.
- the speech input unit 100 inputs speech, the feature amount extraction unit 102 extracts feature amounts in units of frames, the hypothesis search unit 104 searches for hypotheses from the feature amounts, the recognition result candidate determination unit 106 determines recognition result candidates from the searched hypotheses, and the result output unit 108 outputs the recognition result candidates.
- the speech recognition apparatus 1 according to the first embodiment of the present invention can therefore output the result earlier, by the time α shown in FIG. 4, than the technique described in Patent Document 1.
- an appropriate number of recognition result candidates can be output during utterance.
- when there is only one high-scoring hypothesis, the speech recognition apparatus 1 determines the hypothesis with the highest score as the recognition result candidate.
- when there are a plurality of high-scoring hypotheses, the speech recognition apparatus 1 determines those high-scoring hypotheses as recognition result candidates. For this reason, the speech recognition apparatus 1 can output an appropriate number of recognition result candidates according to the situation.
- the speech recognition apparatus 1 may also output a number of recognition result candidates that depends on the configured threshold, so it can easily be adapted to the user's purpose. For example, when the threshold is set high, the number of recognition result candidates decreases, so the speech recognition apparatus 1 according to this embodiment can reduce the effort the user must spend selecting an appropriate candidate from the recognition result candidates.
- FIG. 5 is a block diagram showing a configuration example of the speech recognition apparatus 2 according to the second exemplary embodiment of the present invention.
- the speech recognition apparatus 2 according to the second exemplary embodiment of the present invention differs from the first embodiment in that it includes a reliability calculation unit 110 and in that the recognition result candidate determination unit 106 is replaced with a recognition result candidate determination unit 112.
- the reliability calculation unit 110 calculates the reliability of the hypotheses searched by the hypothesis search unit 104. Specifically, the reliability calculation unit 110 calculates, as the reliability, the normalized score of each hypothesis searched by the hypothesis search unit 104 and outputs the reliabilities to the recognition result candidate determination unit 112.
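- the patent states only that the reliability is a normalized hypothesis score; the sketch below normalizes each score by the sum of all scores so that the reliabilities add up to 1, which is one plausible reading and an assumption of this example.

```python
# Reliability as a sum-normalized score (one possible normalization).

def reliabilities(hypothesis_scores):
    """Map each hypothesis to its score divided by the total of all scores."""
    total = sum(hypothesis_scores.values())
    return {hyp: score / total for hyp, score in hypothesis_scores.items()}

print(reliabilities({"oNsei": 60.0, "oNseN": 25.0, "oNsetsu": 15.0}))
# -> {'oNsei': 0.6, 'oNseN': 0.25, 'oNsetsu': 0.15}
```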
- in addition to the operation of the first embodiment, the recognition result candidate determination unit 112 determines an appropriate number of recognition result candidates based on the hypothesis reliabilities calculated by the reliability calculation unit 110. Specifically, the recognition result candidate determination unit 112 sums the reliabilities of the higher-scoring hypotheses calculated by the reliability calculation unit 110 and, when the total reliability exceeds a predetermined threshold (second threshold), determines the recognition result candidates and outputs them to the result output unit 108.
- the recognition result candidate determination unit 112 may determine a recognition result candidate based only on the total reliability of hypotheses with higher scores calculated by the reliability calculation unit 110.
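- the following sketch illustrates the second-threshold decision: reliabilities are summed from the highest-scoring hypothesis downward and the candidates are fixed once the running total exceeds the threshold. The threshold value and the toy numbers are assumptions for the example.

```python
# Second-threshold decision based on cumulative reliability.

def decide_by_reliability(hypothesis_scores, hypothesis_reliabilities,
                          second_threshold=0.8):
    ranked = sorted(hypothesis_scores, key=hypothesis_scores.get, reverse=True)
    total, candidates = 0.0, []
    for hyp in ranked:
        candidates.append(hyp)
        total += hypothesis_reliabilities[hyp]
        if total > second_threshold:
            return candidates            # enough confidence mass is covered
    return None                          # not confident enough yet

scores = {"oNsei": 60.0, "oNseN": 25.0, "oNsetsu": 15.0}
rel = {hyp: s / sum(scores.values()) for hyp, s in scores.items()}
print(decide_by_reliability(scores, rel))   # -> ['oNsei', 'oNseN'] (0.6 + 0.25 > 0.8)
```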
- FIG. 6 is a flowchart showing the operation of the speech recognition apparatus 2 according to the second exemplary embodiment of the present invention.
- the same reference numerals are given to substantially the same processes as those shown in FIG. 3 (the duplicate description is omitted).
- in S100 to S106, the speech input unit 100 inputs speech, the feature amount extraction unit 102 extracts feature amounts in units of frames, the hypothesis search unit 104 searches for hypotheses from the feature amounts, and the recognition result candidate determination unit 112 compares the scores.
- in step 110 (S110), the reliability calculation unit 110 calculates the reliability based on the scores of the hypotheses searched by the hypothesis search unit 104.
- in step 112 (S112), if the total reliability of the higher-scoring hypotheses, among the reliabilities calculated by the reliability calculation unit 110, exceeds the second threshold, the recognition result candidate determination unit 112 proceeds to S108; otherwise, the process returns to S102. Specifically, the recognition result candidate determination unit 112 arranges the hypotheses in descending order of score, sums the reliabilities starting from the highest-scoring hypothesis, and proceeds to S108 when the total exceeds the second threshold.
- FIG. 7 is a block diagram showing a configuration example of the speech recognition apparatus 3 according to the third embodiment of the present invention.
- the speech recognition apparatus 3 according to the third exemplary embodiment of the present invention differs from the speech recognition apparatus 2 according to the second embodiment in that the recognition result candidate determination unit 112 is replaced with a recognition result candidate determination unit 114.
- in the third embodiment, the recognition result candidate determination unit 114 determines the recognition result candidates when the state in which the difference between the scores of two hypotheses searched by the hypothesis search unit 104 exceeds a predetermined threshold (third threshold) has continued for a predetermined time.
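- a minimal sketch of this timing condition follows: the top-two score gap must stay above the third threshold for a number of consecutive frames before the decision is made. Expressing "a predetermined time" as a frame count, and the particular count used, are assumptions of the sketch.

```python
# Track how long the top-two score gap has exceeded the third threshold.

class GapDurationChecker:
    def __init__(self, third_threshold, required_frames=20):
        self.third_threshold = third_threshold
        self.required_frames = required_frames      # assumed duration in frames
        self.frames_held = 0

    def update(self, ranked_scores):
        """ranked_scores: scores sorted in descending order for the current frame.
        Returns True once the gap condition has held long enough."""
        if (len(ranked_scores) >= 2
                and ranked_scores[0] - ranked_scores[1] > self.third_threshold):
            self.frames_held += 1
        else:
            self.frames_held = 0                     # condition broken, start over
        return self.frames_held >= self.required_frames
```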
- FIG. 8 is a flowchart showing the operation of the speech recognition apparatus 3 according to the third embodiment of the present invention.
- the same reference numerals are given to substantially the same processes as those shown in FIG. 6 (the duplicate description is omitted).
- in S100 to S112, the voice input unit 100 inputs voice, the feature quantity extraction unit 102 extracts feature quantities in units of frames, the hypothesis search unit 104 searches for hypotheses based on the feature quantities, the reliability calculation unit 110 calculates the reliability of each hypothesis, the recognition result candidate determination unit 114 compares the scores, and the recognition result candidate determination unit 114 calculates the total reliability of the higher-scoring hypotheses.
- in step 114 (S114), the recognition result candidate determination unit 114 proceeds to S108 when the state in which the difference between the scores of the two hypotheses exceeds the third threshold has continued for a predetermined time; otherwise, the process returns to S102.
- in the third embodiment, the recognition result candidate determination unit 114 determines the recognition result candidates when the state in which the difference between the scores of the two hypotheses searched by the hypothesis search unit 104 exceeds the third threshold has continued for a predetermined time.
- in a modification of the third embodiment, by contrast, the recognition result candidate determination unit 114 determines the recognition result candidates when the state in which the total reliability of the higher-scoring hypotheses calculated by the reliability calculation unit 110 exceeds a predetermined threshold (fourth threshold) has continued for a predetermined time.
- the recognition result candidate determination unit 114 may also determine the recognition result candidates when the state in which both criteria are satisfied, namely the difference between the scores of the two hypotheses searched by the hypothesis search unit 104 and the total reliability of the higher-scoring hypotheses calculated by the reliability calculation unit 110, has continued for a predetermined time.
- FIG. 9 is a block diagram showing a configuration example of the speech recognition apparatus 4 according to the fourth embodiment of the present invention.
- the speech recognition apparatus 4 according to the fourth exemplary embodiment of the present invention differs from the speech recognition apparatus 2 according to the second embodiment in that it includes a recognition target vocabulary storage unit 118 and an acoustic model storage unit 120.
- the recognition target vocabulary storage unit 118 and the acoustic model storage unit 120 are realized by a storage device such as the memory 12 or the HDD 14, for example.
- the recognition target vocabulary storage unit 118 stores phoneme strings such as "onsei (oNsei: speech)", "onseininshiki (oNseiniNsiki: speech recognition)", "onseigosei (oNseigosei: speech synthesis)", "onsetsu (oNsetsu: syllable)", and "onsen (oNseN: hot spring)".
- the phoneme strings stored in the recognition target vocabulary storage unit 118 are used by the hypothesis search unit 116 with the head phoneme portions of words that share the same initial phonemes merged.
- FIG. 10 is a diagram illustrating a usage example of phoneme strings stored in the recognition target vocabulary storage unit 118.
- as shown in FIG. 10, phoneme strings such as "onsei (oNsei)", "onseininshiki (oNseiniNsiki)", "onseigosei (oNseigosei)", "onsetsu (oNsetsu)", and "onsen (oNseN)" are merged at their shared head phonemes and used.
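- the following sketch builds a tree-structured (prefix-merged) dictionary like the one in FIG. 10; for brevity it treats each character of the romaji string as one node, which is a simplification of the patent's phoneme units.

```python
# Build a nested-dict prefix tree in which words sharing head phonemes are merged.

def build_tree_dictionary(words):
    root = {}
    for word in words:
        node = root
        for symbol in word:              # simplified: one symbol per tree node
            node = node.setdefault(symbol, {})
        node["$"] = word                 # leaf marker holding the complete word
    return root

tree = build_tree_dictionary(["oNsei", "oNseiniNsiki", "oNseigosei", "oNsetsu", "oNseN"])
# All five words share the branch o -> N -> s -> e before splitting.
print(list(tree["o"]["N"]["s"]["e"].keys()))   # -> ['i', 't', 'N']
```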
- the acoustic model storage unit 120 stores an acoustic model, for example an HMM (Hidden Markov Model), that models the acoustic patterns corresponding to readings.
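- as a rough illustration of how an acoustic model could score one frame, the sketch below evaluates a single diagonal-covariance Gaussian; real HMM-based acoustic models use mixtures of such densities (or neural networks) per state, and all parameter values here are placeholders rather than anything specified in the patent.

```python
import math

def diag_gaussian_log_likelihood(feature, mean, variance):
    """Log likelihood of a feature vector under a diagonal-covariance Gaussian."""
    return -0.5 * sum(
        math.log(2.0 * math.pi * v) + (x - m) ** 2 / v
        for x, m, v in zip(feature, mean, variance)
    )

frame = [1.2, -0.3, 0.8]                              # e.g. three MFCC coefficients
state_mean, state_variance = [1.0, 0.0, 1.0], [0.5, 0.5, 0.5]
print(diag_gaussian_log_likelihood(frame, state_mean, state_variance))
```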
- the hypothesis search unit 116 reads the tree-structured dictionary and the acoustic model from the recognition target vocabulary storage unit 118 and the acoustic model storage unit 120, and searches for hypotheses using the recognition result candidate hypothesis scores calculated from the speech feature values input frame by frame in time order. Specifically, the hypothesis search unit 116 receives the speech feature values output from the feature amount extraction unit 102 frame by frame in time order, expands hypotheses in order from the first phoneme in the tree-structured dictionary, and calculates their scores.
- the hypothesis search unit 116 may use a language score.
- the hypothesis search unit 116 may perform pruning by beam search.
- beam search evaluates the acoustic score and the language score together at each time, and hypotheses with poor scores are judged unlikely and are not expanded further.
- that is, the hypothesis search unit 116 may stop expanding hypotheses whose scores are poor at a given time.
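- the sketch below shows one common form of beam pruning consistent with the description above: hypotheses whose total score falls more than a beam width below the current best are dropped. The beam width value is an assumption.

```python
# Keep only hypotheses within `beam_width` of the best score at this time.

def prune_beam(hypothesis_scores, beam_width=50.0):
    if not hypothesis_scores:
        return {}
    best = max(hypothesis_scores.values())
    return {hyp: score for hyp, score in hypothesis_scores.items()
            if score >= best - beam_width}

print(prune_beam({"oNsei": -120.0, "oNseN": -150.0, "oNsetsu": -210.0}))
# -> {'oNsei': -120.0, 'oNseN': -150.0}   ("oNsetsu" falls outside the beam)
```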
- the hypothesis search unit 116 outputs the searched hypotheses and their scores to the reliability calculation unit 110.
- FIG. 11 is a block diagram showing a configuration example of the speech recognition apparatus 5 according to the fifth embodiment of the present invention.
- the speech recognition apparatus 5 according to the fifth embodiment of the present invention has a configuration common to the above-described embodiments.
- the hypothesis search unit 104 calculates the hypothesis score of the recognition result candidate based on the speech feature quantity input in frame units in time order, and searches for the hypothesis by referring to the calculated hypothesis score of the recognition result candidate. .
- the recognition result candidate determination unit 106 determines the higher-scoring hypotheses as the recognition result candidates from the plurality of hypotheses searched at a certain time by the hypothesis search unit, based on the distribution of scores accumulated up to that time.
- the present invention can be used for an isolated word speech recognition system used for an application launcher, personal name input, and the like.
- 1 Speech recognition device; 10 CPU; 12 Memory; 14 HDD; 16 Communication IF; 18 Display device; 20 Voice input device; 22 Input device; 24 Bus; 100 Voice input unit; 102 Feature quantity extraction unit; 104 Hypothesis search unit; 106, 112, 114 Recognition result candidate determination unit; 108 Result output unit; 110 Reliability calculation unit; 116 Hypothesis search unit; 118 Recognition target vocabulary storage unit; 120 Acoustic model storage unit
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
[Problem] To provide a speech recognition device that can determine a suitable number of recognition result candidates during the progress of speaking.
[Solution] This speech recognition device is provided with: a hypothesis search unit (104) that calculates hypothesis scores for recognition result candidates on the basis of speech feature values input in frame units in time order and searches for hypotheses by referring to the calculated scores; and a recognition result candidate determination unit (106) that, from a plurality of hypotheses found at a certain time by the hypothesis search unit (104), determines the top-scoring hypotheses as recognition result candidates on the basis of the distribution of scores accumulated up to that time.
Description
The present invention relates to a speech recognition device, a speech recognition method, and a computer program, and more particularly to a technique for searching for speech recognition result candidates.

Techniques for searching for recognition result candidates in speech recognition are generally known. For example, Patent Document 1 discloses a word search device that outputs N-best recognition results as speech recognition candidates after the utterance is completed. Patent Document 2 discloses a speech recognition system that can sequentially output a plurality of speech recognition result candidates during an utterance.

However, since the technique described in Patent Document 1 outputs speech recognition result candidates only after the user finishes speaking, the user must wait before the speech recognition result can be confirmed.

Since the technique described in Patent Document 2 sequentially outputs a plurality of recognition result candidates, many candidates may be output, which requires the user to spend effort selecting among them.

In view of the above problems, an object of the present invention is to provide a speech recognition apparatus and the like that can determine an appropriate number of recognition result candidates during an utterance.

To achieve the above object, the speech recognition apparatus according to the present invention includes a hypothesis search unit that calculates scores for recognition result candidate hypotheses based on speech feature values input frame by frame in time order and searches for hypotheses by referring to the calculated scores, and a recognition result candidate determination unit that, from a plurality of hypotheses searched at a certain time by the hypothesis search unit, determines the higher-scoring hypotheses as the recognition result candidates based on the distribution of scores accumulated up to that time.

The speech recognition method according to the present invention calculates scores for recognition result candidate hypotheses based on speech feature values input frame by frame in time order, searches for hypotheses by referring to the calculated scores, and, from a plurality of hypotheses searched at a certain time, determines the higher-scoring hypotheses as the recognition result candidates based on the distribution of scores accumulated up to that time.

Furthermore, the computer program according to the present invention causes a computer included in a speech recognition apparatus to execute a hypothesis search step of calculating scores for recognition result candidate hypotheses based on speech feature values input frame by frame in time order and searching for hypotheses by referring to the calculated scores, and a recognition result candidate determination step of determining, from a plurality of hypotheses searched at a certain time, the higher-scoring hypotheses as the recognition result candidates based on the distribution of scores accumulated up to that time.

The same object can also be achieved by a computer-readable storage medium storing the computer program.

According to the speech recognition apparatus of the present invention, an appropriate number of recognition result candidates can be determined during an utterance.
First, the embodiments of the present invention described below handle Japanese expressions as an example. For the convenience of examination in other languages after entry into the national phase, each embodiment therefore uses the notation "phoneme-based Japanese katakana (romaji transliteration: gloss in another language)".

<First Embodiment>
A first embodiment of the speech recognition apparatus according to the present invention will be described.
FIG. 1 is a block diagram showing a hardware configuration example of an information processing apparatus (computer) capable of realizing the speech recognition apparatus 1 according to the first embodiment of the present invention. As shown in FIG. 1, the speech recognition apparatus 1 includes a CPU 10, a memory 12, an HDD (hard disk drive) 14, a communication IF (interface) 16 that performs data communication via a communication network (not shown), a display device 18 such as a display, a voice input device 20 such as a microphone that inputs voice and outputs a voice signal, and an input device 22 including a keyboard and a pointing device such as a mouse. The input device 22 also includes a reader/writer that can read information stored in a computer-readable storage medium such as a CD (compact disc). These components are connected to one another through a bus 24 and exchange data with one another. The speech recognition apparatus 1 according to the present embodiment is realized by the CPU 10 executing a program (computer program) stored in the memory 12 or the HDD 14. The hardware configuration example of the speech recognition apparatus 1 shown in FIG. 1 is also applicable to the embodiments described later.

FIG. 2 is a block diagram showing a configuration example of the speech recognition apparatus 1 according to the first embodiment of the present invention. As shown in FIG. 2, the speech recognition apparatus 1 includes a speech input unit 100, a feature amount extraction unit 102, a hypothesis search unit 104, a recognition result candidate determination unit 106, and a result output unit 108. The configuration of the speech recognition apparatus 1 is realized by the CPU 10 (FIG. 1) reading a program stored in the memory 12 or the HDD 14 into the memory 12 or the like and executing it. The configuration of the speech recognition apparatus 1 may also be realized by the CPU 10 (FIG. 1) executing a program acquired from the outside through the communication IF 16 or an input device 22 such as a reader/writer. All or some of the functions of the speech recognition apparatus 1 may be realized by hardware provided in the speech recognition apparatus 1.

In the speech recognition apparatus 1, the speech input unit 100 inputs the user's utterance from the speech input device 20 (FIG. 1) and outputs the input speech to the feature amount extraction unit 102 as a speech signal. When the speech input unit 100 detects the start of speech, it starts speech input.

The speech input unit 100 may detect the start of speech based on an external instruction, for example, when a "start" button provided on the input device 22 is pressed. The speech input unit 100 may feed back the start of speech input to the user via a volume indicator or other display on the display device 18. The speech input unit 100 may also start recording together with the start of speech input. Furthermore, the speech input unit 100 may input recorded speech data.

The feature amount extraction unit 102 converts the speech signal output from the speech input unit 100 into speech feature amounts such as MFCC (Mel Frequency Cepstral Coefficients) and power in units of fixed sections (frames), and outputs them to the hypothesis search unit 104.
The hypothesis search unit 104 calculates scores for recognition result candidate hypotheses based on the speech feature values input frame by frame in time order, and searches for hypotheses by referring to the calculated scores. Specifically, the hypothesis search unit 104 accepts the speech feature values output from the feature amount extraction unit 102 frame by frame in time order, searches for hypotheses according to the score of each recognition result candidate hypothesis calculated from the speech feature values, and outputs the searched hypotheses and the calculated scores to the recognition result candidate determination unit 106.

For example, the hypothesis search unit 104 calculates an acoustic score for each hypothesis. In this case, with t denoting the current frame, the hypothesis search unit 104 calculates the acoustic score by adding the acoustic-model likelihood of the feature value in frame t to the acoustic score accumulated up to frame (t-1) (the accumulated acoustic score). The hypothesis search unit 104 may use the sum of the acoustic score and a language score as the score.

In the present embodiment, the hypothesis search unit 104 uses a language model look-ahead score, which is the best language score among the words reachable from the hypothesis, as the language score. For example, when the user has uttered "onse" (oNse), the hypothesis search unit 104 is searching hypotheses at the /e/ stage of /oNse/. In this case, the hypothesis search unit 104 takes as the language model look-ahead score the highest language score among words such as "onsei (oNsei: speech)", "onseininshiki (oNseiniNsiki: speech recognition)", "onsetsu (oNsetsu: syllable)", and "onsen (oNseN: hot spring)".

The recognition result candidate determination unit 106 determines the higher-scoring hypotheses as recognition result candidates from the plurality of hypotheses searched at a certain time by the hypothesis search unit 104, based on the distribution of the accumulated scores. Specifically, the recognition result candidate determination unit 106 compares the accumulated scores of the hypotheses searched at a certain time by the hypothesis search unit 104; when the difference between scores exceeds a predetermined threshold (first threshold), it determines the higher-scoring hypotheses as recognition result candidates in descending order of score and outputs them to the result output unit 108. For example, when the difference between the scores of two hypotheses exceeds the first threshold, the recognition result candidate determination unit 106 determines the hypothesis with the higher score as a recognition result candidate.

For example, the first threshold is a value obtained by multiplying the sum of the scores of the plurality of hypotheses searched by the hypothesis search unit 104 by a predetermined ratio. According to this embodiment, because this bounds the maximum number of candidates, the number of candidates does not grow too large. Specifically, if the first threshold is 10% of the total score of the plurality of hypotheses searched at a certain time by the hypothesis search unit 104, at most 10 hypotheses can have a score equal to or greater than the threshold, so the maximum number of recognition result candidates is 10. If the first threshold is 20% of the total score, at most 5 hypotheses can have a score equal to or greater than the threshold, so the maximum number of recognition result candidates is 5.

The result output unit 108 outputs the recognition result candidates determined by the recognition result candidate determination unit 106. For example, the result output unit 108 displays the recognition result candidates on the display device 18.
Next, the operation of the speech recognition apparatus 1 will be described.
FIG. 3 is a flowchart showing the operation of the speech recognition apparatus 1 according to the first embodiment. As shown in FIG. 3, in step 100 (S100), when the speech input unit 100 detects the start of speech, it starts speech input.

In step 102 (S102), the feature amount extraction unit 102 converts the speech signal input by the speech input unit 100 into speech feature amounts such as MFCC and power in units of frames.

In step 104 (S104), the hypothesis search unit 104 receives the speech feature amounts converted by the feature amount extraction unit 102 frame by frame in time order, calculates the score of each hypothesis for each frame, and searches for hypotheses.

In step 106 (S106) and step 108 (S108), the recognition result candidate determination unit 106 determines the higher-scoring hypotheses among the plurality of hypotheses searched at a certain time by the hypothesis search unit 104 as recognition result candidates according to the score distribution. Specifically, in step 106 (S106), the recognition result candidate determination unit 106 compares the scores of the hypotheses searched by the hypothesis search unit 104; if the difference between two hypotheses exceeds the first threshold, the flow advances to step 108 (S108), and otherwise it returns to S102. For example, the recognition result candidate determination unit 106 arranges the hypotheses in descending order of score and proceeds to S108 when the difference between the score of a hypothesis and the score of the next-ranked hypothesis exceeds the first threshold.

In step 108 (S108), the recognition result candidate determination unit 106 determines the higher-scoring hypotheses as recognition result candidates. Specifically, it determines as recognition result candidates the hypotheses down to the one whose score differs from that of the next-ranked hypothesis by more than the first threshold.
FIG. 4 is a sequence diagram showing the operation of the speech recognition apparatus 1 according to the first embodiment. As shown in FIG. 4, in steps 200 (S200) to 210 (S210), the speech input unit 100 inputs speech, the feature amount extraction unit 102 extracts feature amounts in units of frames, the hypothesis search unit 104 searches for hypotheses from the feature amounts, the recognition result candidate determination unit 106 determines recognition result candidates from the searched hypotheses, and the result output unit 108 outputs the recognition result candidates.

In the technique described in Patent Document 1, by contrast, as shown in FIG. 4, speech input processing, feature amount extraction processing, and hypothesis search processing are executed in S200 to S204. After the utterance ends in step 214 (S214) and the speech ends in step 216 (S216), the feature amount extraction processing ends in step 218 (S218). In step 220 (S220), the hypothesis search is repeated frame by frame until the final frame, and in step 222 (S222) the recognition result candidates are determined.

Therefore, the speech recognition apparatus 1 according to the first embodiment of the present invention can output the result earlier, by the time α shown in FIG. 4, than the technique described in Patent Document 1. As described above, according to the speech recognition apparatus 1 of this embodiment, an appropriate number of recognition result candidates can be output during an utterance.

For example, when there is only one high-scoring hypothesis, the speech recognition apparatus 1 according to the first embodiment of the present invention determines the hypothesis with the highest score as the recognition result candidate. When there are a plurality of high-scoring hypotheses, the speech recognition apparatus 1 determines those high-scoring hypotheses as recognition result candidates. For this reason, the speech recognition apparatus 1 can output an appropriate number of recognition result candidates according to the situation.

The speech recognition apparatus 1 may also output a number of recognition result candidates that depends on the configured threshold, so it can easily be adapted to the user's purpose. For example, when the threshold is set high, the number of recognition result candidates decreases, so the speech recognition apparatus 1 according to this embodiment can reduce the effort the user must spend selecting an appropriate candidate from the recognition result candidates.
<Second Embodiment>
Next, a second embodiment of the speech recognition apparatus according to the present invention will be described.
FIG. 5 is a block diagram showing a configuration example of the speech recognition apparatus 2 according to the second exemplary embodiment of the present invention. As shown in FIG. 5, the speech recognition apparatus 2 according to the second exemplary embodiment differs from the first embodiment in that it includes a reliability calculation unit 110 and in that the recognition result candidate determination unit 106 is replaced with a recognition result candidate determination unit 112.

The reliability calculation unit 110 calculates the reliability of the hypotheses searched by the hypothesis search unit 104. Specifically, the reliability calculation unit 110 calculates, as the reliability, the normalized score of each hypothesis searched by the hypothesis search unit 104 and outputs the reliabilities to the recognition result candidate determination unit 112.

In addition to the operation of the first embodiment, the recognition result candidate determination unit 112 determines an appropriate number of recognition result candidates based on the hypothesis reliabilities calculated by the reliability calculation unit 110. Specifically, the recognition result candidate determination unit 112 sums the reliabilities of the higher-scoring hypotheses calculated by the reliability calculation unit 110 and, when the total reliability exceeds a predetermined threshold (second threshold), determines the recognition result candidates and outputs them to the result output unit 108.

The recognition result candidate determination unit 112 may also determine the recognition result candidates based only on the total reliability of the higher-scoring hypotheses calculated by the reliability calculation unit 110.
Next, the operation of the voice recognition device 2 will be described.
FIG. 6 is a flowchart showing the operation of the speech recognition apparatus 2 according to the second exemplary embodiment of the present invention. Of the processes shown in FIG. 6, those substantially identical to the processes shown in FIG. 3 are given the same reference numerals (duplicate descriptions are omitted). As shown in FIG. 6, in S100 to S106, the speech input unit 100 inputs speech, the feature amount extraction unit 102 extracts feature amounts in units of frames, the hypothesis search unit 104 searches for hypotheses from the feature amounts, and the recognition result candidate determination unit 112 compares the scores.

In step 110 (S110), the reliability calculation unit 110 calculates the reliability based on the scores of the hypotheses searched by the hypothesis search unit 104.

In step 112 (S112), if the total reliability of the higher-scoring hypotheses, among the reliabilities calculated by the reliability calculation unit 110, exceeds the second threshold, the recognition result candidate determination unit 112 proceeds to S108; otherwise, the process returns to S102. Specifically, the recognition result candidate determination unit 112 arranges the hypotheses in descending order of score, sums the reliabilities starting from the highest-scoring hypothesis, and proceeds to S108 when the total exceeds the second threshold.
<Third Embodiment>
Next, a third embodiment of the speech recognition apparatus according to the present invention will be described.
FIG. 7 is a block diagram showing a configuration example of the speech recognition apparatus 3 according to the third embodiment of the present invention. As shown in FIG. 7, the speech recognition apparatus 3 according to the third exemplary embodiment differs from the speech recognition apparatus 2 according to the second embodiment in that the recognition result candidate determination unit 112 is replaced with a recognition result candidate determination unit 114.

In the third embodiment, the recognition result candidate determination unit 114 determines the recognition result candidates when the state in which the difference between the scores of two hypotheses searched by the hypothesis search unit 104 exceeds a predetermined threshold (third threshold) has continued for a predetermined time.
Next, the operation of the voice recognition device 3 will be described.
FIG. 8 is a flowchart showing the operation of the speech recognition apparatus 3 according to the third embodiment of the present invention. Among the processes shown in FIG. 8, those substantially identical to the processes shown in FIG. 6 are denoted by the same reference signs, and duplicate description is omitted. As shown in FIG. 8, in S100 to S112 the voice input unit 100 inputs speech, the feature quantity extraction unit 102 extracts feature values in units of frames, the hypothesis search unit 104 searches for hypotheses based on the feature values, the reliability calculation unit 110 calculates the reliability of each hypothesis, the recognition result candidate determination unit 112 compares the scores, and the recognition result candidate determination unit 112 sums the reliabilities of the top-scoring hypotheses.
In step 114 (S114), the recognition result candidate determination unit 114 proceeds to S108 when the difference between the scores of the two hypotheses has exceeded the third threshold for the predetermined time, and otherwise returns to S102.
<Modification of Third Embodiment>
Next, a modification of the third embodiment of the speech recognition apparatus according to the present invention will be described. In the third embodiment, the recognition result candidate determination unit 114 determines the recognition result candidates when the difference between the scores of the two hypotheses searched by the hypothesis search unit 104 has exceeded the third threshold for a predetermined time. In this modification, by contrast, the recognition result candidate determination unit 114 determines the recognition result candidates when the total reliability of the top-scoring hypotheses calculated by the reliability calculation unit 110 has exceeded a predetermined threshold (a fourth threshold) for a predetermined time.
The recognition result candidate determination unit 114 may also determine the recognition result candidates when both criteria, namely the score difference between the two hypotheses searched by the hypothesis search unit 104 and the total reliability of the top-scoring hypotheses calculated by the reliability calculation unit 110, have been satisfied continuously for a predetermined time.
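For completeness, a short sketch of the combined criterion is given below; the thresholds are assumed values, and the duration tracking can reuse the frame counter shown in the previous sketch.

```python
# Sketch of the combined variant: a frame counts toward the "predetermined
# time" only when both the score-gap criterion and the reliability-sum
# criterion hold at that frame.
def both_criteria_met(scores: list[float], reliabilities: list[float],
                      gap_threshold: float, reliability_threshold: float) -> bool:
    ranked = sorted(zip(scores, reliabilities), key=lambda p: p[0], reverse=True)
    gap_ok = len(ranked) < 2 or (ranked[0][0] - ranked[1][0]) > gap_threshold
    total, sum_ok = 0.0, False
    for _, reliability in ranked:          # sum from the best hypothesis down
        total += reliability
        if total > reliability_threshold:
            sum_ok = True
            break
    return gap_ok and sum_ok
```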
<Fourth Embodiment>
Next, a fourth embodiment of the speech recognition apparatus according to the present invention will be described.
FIG. 9 is a block diagram showing a configuration example of the speech recognition apparatus 4 according to the fourth embodiment of the present invention. As shown in FIG. 9, the speech recognition apparatus 4 according to the fourth embodiment differs from the speech recognition apparatus 2 according to the second embodiment in that it further includes a recognition target vocabulary storage unit 118 and an acoustic model storage unit 120. The recognition target vocabulary storage unit 118 and the acoustic model storage unit 120 are realized by a storage device such as the memory 12 or the HDD 14, for example.
The recognition target vocabulary storage unit 118 stores phoneme strings such as "oNsei" (speech), "oNseiniNsiki" (speech recognition), "oNseigousei" (speech synthesis), "oNsetsu" (syllable), and "oNseN" (hot spring). The phoneme strings stored in the recognition target vocabulary storage unit 118 are used by the hypothesis search unit 116 in such a way that the head phoneme portions of words sharing the same head phonemes are merged.
FIG. 10 is a diagram illustrating how the phoneme strings stored in the recognition target vocabulary storage unit 118 are used. As shown in FIG. 10, the phoneme strings such as "oNsei" (speech), "oNseiniNsiki" (speech recognition), "oNseigousei" (speech synthesis), "oNsetsu" (syllable), and "oNseN" (hot spring) are merged at their common head phonemes and used in that form.
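The following sketch illustrates how such a tree-structured dictionary might be built by merging common head phonemes; the phoneme segmentation and the nested-dictionary node layout are assumptions for illustration, not the publication's data structure.

```python
# Sketch of building a tree-structured dictionary by merging the common head
# phonemes of the vocabulary, as illustrated in FIG. 10.
def build_tree_dictionary(vocabulary: dict[str, list[str]]) -> dict:
    """vocabulary maps a word to its phoneme sequence; the returned nested
    dict shares one branch for every common head-phoneme sequence."""
    root: dict = {}
    for word, phonemes in vocabulary.items():
        node = root
        for phoneme in phonemes:
            node = node.setdefault(phoneme, {})   # reuse an existing branch
        node["<word>"] = word                      # mark a word end
    return root

vocabulary = {
    "oNsei":        ["o", "N", "s", "e", "i"],
    "oNseiniNsiki": ["o", "N", "s", "e", "i", "n", "i", "N", "s", "i", "k", "i"],
    "oNseigousei":  ["o", "N", "s", "e", "i", "g", "o", "u", "s", "e", "i"],
    "oNsetsu":      ["o", "N", "s", "e", "t", "s", "u"],
    "oNseN":        ["o", "N", "s", "e", "N"],
}
tree = build_tree_dictionary(vocabulary)
# All five words share the branch o -> N -> s -> e before splitting.
```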
The acoustic model storage unit 120 stores an acoustic model obtained by modeling the acoustic patterns corresponding to readings. For example, an HMM (Hidden Markov Model) is used as the acoustic model.
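As a rough illustration of frame-wise acoustic scoring with such a model, the sketch below evaluates one feature frame against a single diagonal-covariance Gaussian state; this layout is an assumption, and practical acoustic models typically use Gaussian mixtures or neural networks per state.

```python
# Minimal sketch of frame-wise acoustic scoring against one HMM state modelled
# as a diagonal-covariance Gaussian.
import math

def log_gaussian(frame: list[float], mean: list[float], var: list[float]) -> float:
    """Log-likelihood of one feature frame under a diagonal Gaussian."""
    return -0.5 * sum(
        math.log(2.0 * math.pi * v) + (x - m) ** 2 / v
        for x, m, v in zip(frame, mean, var)
    )

# A hypothesis would accumulate its acoustic score frame by frame, e.g.:
# hypothesis.score += log_gaussian(frame, state.mean, state.var)
```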
The hypothesis search unit 116 reads the tree-structured dictionary and the acoustic model from the recognition target vocabulary storage unit 118 and the acoustic model storage unit 120, calculates the hypothesis scores of the recognition result candidates from the speech feature values input in units of frames in time order, and searches for hypotheses based on these scores. Specifically, the hypothesis search unit 116 receives the speech feature values output from the feature quantity extraction unit 102 in units of frames in time order, expands hypotheses in order from the head phonemes of the tree-structured dictionary, and calculates their scores.
The hypothesis search unit 116 may also use language scores. In this case, the hypothesis search unit 116 may prune hypotheses by beam search. Beam search judges the acoustic score and the language score at each time comprehensively and prunes hypotheses with poor scores as unpromising, that is, it does not expand them any further. The hypothesis search unit 116 may likewise terminate the expansion of a hypothesis whose score at a certain time is poor. The hypothesis search unit 116 outputs the searched hypotheses and their scores to the reliability calculation unit 110.
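A minimal sketch of this beam pruning is shown below; the beam width and the hypothesis fields are assumptions for illustration.

```python
# Sketch of beam pruning as described above: at each frame, hypotheses whose
# combined acoustic + language score falls too far below the current best are
# discarded and not expanded further.
from dataclasses import dataclass
from typing import List

@dataclass
class Hyp:
    phonemes: str
    acoustic_score: float
    language_score: float

    @property
    def total(self) -> float:
        return self.acoustic_score + self.language_score

def prune(hypotheses: List[Hyp], beam_width: float) -> List[Hyp]:
    """Keep only hypotheses within beam_width of the best total score."""
    if not hypotheses:
        return []
    best = max(h.total for h in hypotheses)
    return [h for h in hypotheses if h.total >= best - beam_width]
```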
<Fifth Embodiment>
Next, a fifth embodiment of the speech recognition apparatus according to the present invention will be described.
FIG. 11 is a block diagram showing a configuration example of the speech recognition apparatus 5 according to the fifth embodiment of the present invention. The speech recognition apparatus 5 according to the fifth embodiment has a configuration common to the embodiments described above.
The hypothesis search unit 104 calculates the hypothesis scores of the recognition result candidates based on the speech feature values input in units of frames in time order, and searches for the hypotheses by referring to the calculated hypothesis scores of the recognition result candidates.
From the plurality of hypotheses searched by the hypothesis search unit at a certain time, the recognition result candidate determination unit 106 determines the top-scoring hypotheses as the recognition result candidates based on the distribution of the scores accumulated up to that time.
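One possible reading of this distribution-based decision is sketched below; using a fixed ratio of the summed scores as the threshold, treating the scores as non-negative, and the 0.2 ratio are assumptions, not the publication's definition.

```python
# Sketch of one way to decide candidates from the score distribution at a
# given time: treat a fixed fraction of the summed (non-negative) scores as
# the threshold and keep the top-scoring hypotheses that exceed it.
from typing import Dict, List

def decide_from_distribution(scores: Dict[str, float], ratio: float = 0.2) -> List[str]:
    """scores maps each hypothesis to its score accumulated up to this time."""
    threshold = ratio * sum(scores.values())
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [hyp for hyp, score in ranked if score > threshold]

# Example: with scores {"oNsei": 10.0, "oNseN": 6.0, "oNsetsu": 1.0} and
# ratio 0.2 the threshold is 3.4, so "oNsei" and "oNseN" become candidates.
```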
Although the present invention has been described above with reference to the embodiments, the present invention is not limited to the above embodiments. Various changes that can be understood by those skilled in the art can be made to the configuration and details of the present invention within the scope of the present invention.
This application claims priority based on Japanese Patent Application No. 2012-035402 filed on February 21, 2012, the entire disclosure of which is incorporated herein.
The present invention can be used for an isolated word speech recognition system used for an application launcher, personal name input, and the like.
1, 2, 3, 4, 5 Speech recognition apparatus
10 CPU
12 Memory
14 HDD
16 Communication IF
18 Display device
20 Voice input device
22 Input device
24 Bus
100 Voice input unit
102 Feature quantity extraction unit
104 Hypothesis search unit
106, 112, 114 Recognition result candidate determination unit
108 Result output unit
110 Reliability calculation unit
116 Hypothesis search unit
118 Recognition target vocabulary storage unit
120 Acoustic model storage unit
Claims (8)
- A speech recognition apparatus comprising: a hypothesis search unit that calculates hypothesis scores of recognition result candidates based on speech feature values input in units of frames in time order and searches for the hypotheses by referring to the calculated hypothesis scores of the recognition result candidates; and a recognition result candidate determination unit that, from a plurality of hypotheses searched by the hypothesis search unit at a certain time, determines the top-scoring hypotheses as the recognition result candidates based on the distribution of the scores accumulated up to that time.
- The speech recognition apparatus according to claim 1, wherein, when the difference between the score of a hypothesis searched by the hypothesis search unit and the score of the next-ranked hypothesis exceeds a first threshold, the recognition result candidate determination unit determines the top-scoring hypotheses as the recognition result candidates in descending order of score.
- The speech recognition apparatus according to claim 1 or 2, further comprising a reliability calculation unit that calculates the reliability of each hypothesis, wherein the recognition result candidate determination unit determines the top-scoring hypotheses as the recognition result candidates when the total reliability of the top-scoring hypotheses calculated by the reliability calculation unit exceeds a second threshold.
- The speech recognition apparatus according to claim 2 or 3, wherein the recognition result candidate determination unit determines the recognition result candidates when the difference between the score of a hypothesis and the score of the next-ranked hypothesis has exceeded a third threshold for a predetermined time.
- The speech recognition apparatus according to claim 3, wherein the recognition result candidate determination unit determines the top-scoring hypotheses as the recognition result candidates when the total reliability of the top-scoring hypotheses has exceeded a fourth threshold for a predetermined time.
- The speech recognition apparatus according to any one of claims 2 to 5, wherein the threshold is a value obtained by multiplying the sum of the scores of the plurality of hypotheses searched by the hypothesis search unit by a predetermined ratio.
- A speech recognition method comprising: calculating hypothesis scores of recognition result candidates based on speech feature values input in units of frames in time order and searching for the hypotheses by referring to the calculated hypothesis scores of the recognition result candidates; and determining, from a plurality of hypotheses searched at a certain time, the top-scoring hypotheses as the recognition result candidates based on the distribution of the scores accumulated up to that time.
- A program for causing a computer included in a speech recognition apparatus to execute: a hypothesis search step of calculating hypothesis scores of recognition result candidates based on speech feature values input in units of frames in time order and searching for the hypotheses by referring to the calculated hypothesis scores of the recognition result candidates; and a recognition result candidate determination step of determining, from a plurality of hypotheses searched at a certain time, the top-scoring hypotheses as the recognition result candidates based on the distribution of the scores accumulated up to that time.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012035402 | 2012-02-21 | ||
JP2012-035402 | 2012-02-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013125203A1 (en) | 2013-08-29 |
Family
ID=49005400
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2013/000876 WO2013125203A1 (en) | 2012-02-21 | 2013-02-18 | Speech recognition device, speech recognition method, and computer program |
Country Status (2)
Country | Link |
---|---|
JP (1) | JPWO2013125203A1 (en) |
WO (1) | WO2013125203A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2020079921A (en) * | 2018-11-13 | 2020-05-28 | Baidu Online Network Technology (Beijing) Co., Ltd. | Voice interaction realizing method, device, computer device and program |
JP2020187313A (en) * | 2019-05-17 | 2020-11-19 | Nippon Hoso Kyokai (NHK) | Voice recognition device, recognition result output control device, and program |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04298796A (en) * | 1991-03-28 | 1992-10-22 | Nec Corp | Voice recognition device |
JPH06282295A (en) * | 1993-03-29 | 1994-10-07 | A T R Jido Honyaku Denwa Kenkyusho:Kk | Adaptive search system |
JP2000029495A (en) * | 1998-05-07 | 2000-01-28 | Cselt Spa (Cent Stud E Lab Telecomun) | Method and device for voice recognition using recognition techniques of a neural network and a markov model |
WO2000010160A1 (en) * | 1998-08-17 | 2000-02-24 | Sony Corporation | Speech recognizing device and method, navigation device, portable telephone, and information processor |
JP2003208194A (en) * | 2002-01-11 | 2003-07-25 | Nec Corp | Voice recognition method |
JP2003208195A (en) * | 2002-01-16 | 2003-07-25 | Sharp Corp | Device, method and program for recognizing consecutive speech, and program recording medium |
JP2004012615A (en) * | 2002-06-04 | 2004-01-15 | Sharp Corp | Continuous speech recognition apparatus and continuous speech recognition method, continuous speech recognition program and program recording medium |
JP2008064815A (en) * | 2006-09-05 | 2008-03-21 | Nippon Hoso Kyokai <Nhk> | Voice recognition device and voice recognition program |
JP2011527030A (en) * | 2008-07-02 | 2011-10-20 | Google Inc. | Speech recognition using parallel recognition tasks. |
WO2011083528A1 (en) * | 2010-01-06 | 2011-07-14 | NEC Corporation | Data processing apparatus, computer program therefor, and data processing method |
Non-Patent Citations (1)
Title |
---|
HIROSHI KOJIMA ET AL.: "Ultra-Rapid Speech Recognition based on Search Termination using Confidence Scoring", IEICE TECHNICAL REPORT, vol. 108, no. 422, 22 January 2009 (2009-01-22), pages 13 - 18 * |
Also Published As
Publication number | Publication date |
---|---|
JPWO2013125203A1 (en) | 2015-07-30 |
Similar Documents
Publication | Title |
---|---|
US7249017B2 (en) | Speech recognition with score calculation | |
US9812122B2 (en) | Speech recognition model construction method, speech recognition method, computer system, speech recognition apparatus, program, and recording medium | |
KR100755677B1 (en) | Apparatus and method for dialogue speech recognition using topic detection | |
KR100486733B1 (en) | Method and apparatus for speech recognition using phone connection information | |
JP5739718B2 (en) | Interactive device | |
US9978364B2 (en) | Pronunciation accuracy in speech recognition | |
EP2685452A1 (en) | Method of recognizing speech and electronic device thereof | |
KR20050082249A (en) | Method and apparatus for domain-based dialog speech recognition | |
US20070038453A1 (en) | Speech recognition system | |
JP2011033680A (en) | Voice processing device and method, and program | |
JP5326169B2 (en) | Speech data retrieval system and speech data retrieval method | |
CN107610693B (en) | Text corpus construction method and device | |
JP5276610B2 (en) | Language model generation apparatus, program thereof, and speech recognition system | |
JP4700522B2 (en) | Speech recognition apparatus and speech recognition program | |
JP3961780B2 (en) | Language model learning apparatus and speech recognition apparatus using the same | |
JP2005148342A (en) | Method for speech recognition, device, and program and recording medium for implementing the same method | |
WO2013125203A1 (en) | Speech recognition device, speech recognition method, and computer program | |
KR20130126570A (en) | Apparatus for discriminative training acoustic model considering error of phonemes in keyword and computer recordable medium storing the method thereof | |
JP2005275348A (en) | Speech recognition method, device, program and recording medium for executing the method | |
JP4987530B2 (en) | Speech recognition dictionary creation device and speech recognition device | |
JP4733436B2 (en) | Word / semantic expression group database creation method, speech understanding method, word / semantic expression group database creation device, speech understanding device, program, and storage medium | |
Arısoy et al. | Discriminative n-gram language modeling for Turkish | |
Qian et al. | A Multi-Space Distribution (MSD) and two-stream tone modeling approach to Mandarin speech recognition | |
JPH08241096A (en) | Speech recognition method | |
JP3894419B2 (en) | Speech recognition apparatus, method thereof, and computer-readable recording medium recording these programs |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13751130; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 2014500915; Country of ref document: JP; Kind code of ref document: A |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 13751130; Country of ref document: EP; Kind code of ref document: A1 |