JP2002091491A - Voice control system for plural pieces of equipment - Google Patents

Voice control system for plural pieces of equipment

Info

Publication number
JP2002091491A
Authority
JP
Japan
Prior art keywords
voice
user
control system
controlled
operation content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
JP2000284613A
Other languages
Japanese (ja)
Inventor
Atsushi Ouchi
淳 大内
Yoshio Ozawa
芳男 小澤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sanyo Electric Co Ltd
Original Assignee
Sanyo Electric Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sanyo Electric Co Ltd filed Critical Sanyo Electric Co Ltd
Priority to JP2000284613A priority Critical patent/JP2002091491A/en
Publication of JP2002091491A publication Critical patent/JP2002091491A/en
Withdrawn legal-status Critical Current

Landscapes

  • Electric Ovens (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a voice control system for plural pieces of equipment in which the recognition rate of the equipment to be controlled is raised by detecting the direction in which the user speaks.

SOLUTION: The system comprises equipment 80, 82 and 84 to be controlled; microphones 20, 22 and 24, arranged at plural locations in a space, which detect the user's voice; a sound collecting means 30 which collects the voice data detected by the microphones; a voice recognition means 40 which analyzes the content of the voice data input to the sound collecting means 30; a distribution analyzing means 50 which detects the direction of the user's utterance from the magnitude of the voice data input to the sound collecting means 30; a reasoning means 60 which determines the equipment to be controlled and the control contents based on the utterance direction analyzed by the means 50; and an equipment control means 70 which generates control signals for the equipment to be controlled.

Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a voice control system capable of remotely operating a plurality of devices by voice, and more particularly to a voice control system capable of raising the voice recognition rate.

[0002]

2. Description of the Related Art

A voice control system that operates a plurality of devices based on the voice uttered by a user is known. As shown in FIG. 6, this kind of voice control system (90) collects the user's voice, detected through a single microphone (91), in a sound collecting means (92); the voice is analyzed by a voice recognition means (93), and based on the analysis result a device control means (94) operates the target devices (95), (96) and (97). The names and operation contents of the devices (95), (96) and (97) to be operated are stored in the voice recognition means (93) in advance, and the device to be controlled and the operation content are specified by pattern matching between the input voice data and the stored data.

[0003]

Problems to Be Solved by the Invention

In a noisy environment such as a kitchen, the above voice control system has difficulty recognizing speech and may not function well. In addition, the microphone (91) had to be worn by the user in order to raise the sound collection rate. Further, the conventional voice recognition means (93) required the voice pattern of a specific user to be stored in advance, and the recognition rate dropped sharply for voices other than that of the stored specific user.

[0004]

An object of the present invention is to provide a voice control system for a plurality of devices that can raise the recognition rate of the device to be controlled by detecting the direction in which the user speaks.

[0005]

Means for Solving the Problems

To solve the above problems, the voice control system (10) for a plurality of devices according to the present invention comprises: a plurality of devices (80), (82) and (84) to be controlled; a plurality of microphones (20), (22) and (24) arranged at plural locations in a space to detect the user's voice; a sound collecting means (30) that collects the voice data detected by the microphones (20), (22) and (24); a voice recognition means (40) that analyzes the content of the voice data input to the sound collecting means (30); a distribution analysis means (50) that detects the direction of the user's utterance from the magnitude of the voice data input to the sound collecting means (30); an inference means (60) that determines the device (82) to be controlled and the operation content, based on the content of the voice data analyzed by the voice recognition means (40) and the utterance direction of the user analyzed by the distribution analysis means (50); and a device control means (70) that issues a control signal to the device (82) to be controlled, based on the device (82) and the operation content determined by the inference means (60).

[0006]

Operation and Effect

When the user speaks the name of the device (82) to be controlled and the operation content toward that device, the voice is picked up by the microphones (20), (22) and (24) arranged at plural positions, and the voice data is sent from each microphone (20), (22) and (24) to the sound collecting means (30). The sound collecting means (30) sends the voice data from each microphone (20), (22) and (24) to the voice recognition means (40) and the distribution analysis means (50). The voice recognition means (40) analyzes the content of the received voice data and specifies the device (82) to be controlled and the operation content. The distribution analysis means (50) analyzes the direction of the user's utterance from the received voice data and specifies toward which device the user spoke. The inference means (60) specifies the device (82) to be controlled and the operation content, based on both the voice data analyzed by the voice recognition means (40) and the utterance direction analyzed by the distribution analysis means (50). The device control means (70) issues a control signal corresponding to the operation content to the device (82) to be controlled, based on the device (82) and the operation content determined by the inference means (60), and thereby operates the device (82).

[0007]

In the present invention, not only the content of the user's voice but also the direction of utterance is detected in order to specify the device to be controlled, so the recognition rate of the voice data can be improved. The user does not need to wear a microphone and, without being conscious of the microphones, need only speak toward the device to be controlled, so operation is very simple. Even when noise enters the operation command in a noisy place such as a kitchen, the device can still be specified by the distribution analysis means (50), improving the accuracy of voice recognition. Furthermore, using a plurality of microphones raises the sound collection rate and thus the accuracy of voice recognition.

[0008]

BEST MODE FOR CARRYING OUT THE INVENTION

As shown in FIG. 1, the voice control system (10) of the present invention comprises a plurality of microphones (20), (22), (24) and (26); a sound collecting means (30) electrically connected to the microphones (20), (22), (24) and (26); a voice recognition means (40) and a distribution analysis means (50) electrically connected to the sound collecting means (30); an inference means (60) electrically connected to the voice recognition means (40) and the distribution analysis means (50); a device control means (70) electrically connected to the inference means (60); and a plurality of devices (80), (82) and (84) to be controlled, electrically connected to the device control means (70). In this specification, names such as "voice recognition means" and "distribution analysis means" are used for convenience of description, but the processing may also be performed in software.

[0009]

In the following, an embodiment in which the present invention is applied to the kitchen (88) shown in FIG. 2 will be described. A sink (80), a gas stove (82) and a microwave oven (84) are taken as examples of the devices to be controlled, and each device is controlled with the operation contents shown in Table 1.

[0010]

[Table 1]

[0011]

The microphones (20), (22), (24) and (26) are arranged in plurality on the ceiling, floor or walls of the kitchen (88) so as to surround the work space of the user (100). In the illustrated embodiment, the microphones (20), (22), (24) and (26) are arranged at equal intervals on a circle. Each microphone (20), (22), (24) and (26) picks up sound in the kitchen (88), and the collected sound is sent to the sound collecting means (30). For example, when the user (100) faces toward the gas stove (82) and utters the operation command "gas stove, strong" (step 1 in FIG. 3), the loudness of the voice collected by the microphones (20), (22) and (24) is greatest at the microphone (22) located closest to the gas stove (82), as shown in FIG. 4.

[0012]

The sound collecting means (30) gathers the voice data picked up by the microphones. The sound collecting means (30) and the control means described below are arranged at suitable places in the kitchen (88). The sound collecting means (30), for example, measures the amplitude (see FIG. 4) of the voice data collected by each microphone (step 2 in FIG. 3) and sends the voice data of the microphone (22) with the maximum amplitude to the voice recognition means (40) (step 3). The amplitudes of the voice data of the microphones (20), (22), (24) and (26) are sent to the distribution analysis means (50) (step 4).
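Steps 2 and 3 above — measuring each microphone's amplitude and forwarding the loudest channel to the voice recognition means — can be sketched as follows. This is an illustrative reconstruction, not part of the patent; the function names and sample values are hypothetical.

```python
def peak_amplitude(samples):
    """Maximum absolute sample value of one microphone's voice data."""
    return max(abs(s) for s in samples)

def select_loudest_channel(mic_signals):
    """Return the id, samples and per-channel peaks for the loudest microphone.

    mic_signals maps a microphone id to its list of samples (step 2 of FIG. 3);
    the loudest channel is what would be forwarded to the recognizer (step 3).
    """
    amplitudes = {mic: peak_amplitude(sig) for mic, sig in mic_signals.items()}
    loudest = max(amplitudes, key=amplitudes.get)
    return loudest, mic_signals[loudest], amplitudes

# Hypothetical sample data: microphone 22, nearest the gas stove,
# picks up the strongest signal.
signals = {
    20: [0.1, -0.3, 0.2],
    22: [0.5, -0.9, 0.7],
    24: [0.2, -0.4, 0.1],
}
mic, data, amps = select_loudest_channel(signals)  # mic == 22
```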

[0013]

Voice patterns of the names and operation contents of the devices to be controlled are stored in the voice recognition means (40) in advance, and the device to be controlled and the operation content are specified by a known voice recognition technique, such as pattern matching against the voice data from the maximum-amplitude microphone (22) (step 3).

[0014]

Specifically, the voice recognition means (40) pattern-matches the device name and operation content contained in the operation command against the stored voice patterns of device names and operation contents, and calculates an identification rate for each device name and each operation content. For example, if the identification rates of the voice recognition means (40) for the operation command "gas stove, strong" are as shown in Table 2, the identification rate of each device name is multiplied by that of each operation content to determine which combination of device name and operation content is closest to the command uttered by the user, scored as points as shown in Table 3.

[0015]

[Table 2]

[0016]

[Table 3]

[0017]

The "operation commands" (combinations of device name and operation content) and "points" calculated above are sent to the inference means (60).
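The scoring of paragraph [0014] — multiplying each device name's identification rate by each operation content's rate to form the points of Table 3 — can be sketched as follows. The rates below are hypothetical placeholders, since the values of Tables 2 and 3 are not reproduced in this text.

```python
# Hypothetical identification rates standing in for Table 2.
device_rates = {"gas stove": 0.8, "sink": 0.1, "microwave": 0.1}
operation_rates = {"strong": 0.7, "weak": 0.2, "off": 0.1}

def combination_points(device_rates, operation_rates):
    """Score every (device, operation) pair by multiplying its two
    identification rates, as in Table 3."""
    return {
        (dev, op): d * o
        for dev, d in device_rates.items()
        for op, o in operation_rates.items()
    }

points = combination_points(device_rates, operation_rates)
best = max(points, key=points.get)  # ("gas stove", "strong"): 0.8 * 0.7 = 0.56
```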

[0018]

To raise the recognition rate of the device name, the distribution analysis means (50) specifies toward which device the user spoke, that is, the direction of the user's utterance (step 4 in FIG. 3). Specifically, it receives the voice data from the microphones (20), (22), (24) and (26) as waveform data such as that shown in FIG. 4, measures the maximum amplitude of each received voice data, and calculates the probability of the direction in which the user spoke by a method such as the mean square. For example, suppose the maximum amplitudes of the microphones (20), (22) and (24) are, in order, 10 dB, 20 dB and 12 dB, as shown in FIG. 4, and the maximum amplitudes of the microphones (26) are each 3 dB. In this case, the probability that the user is facing each device (80), (82) and (84) can be calculated using the mean square as shown in Table 4.

[0019]

[Table 4]

[0020]

The "device names" and "probabilities" calculated above are sent to the inference means (60).
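One way to turn the per-microphone maximum amplitudes into the facing probabilities of Table 4 is to normalize the squared amplitudes, a sketch consistent with the "mean square" mentioned above. The patent does not specify the exact formula, so this weighting and the microphone-to-device mapping are assumptions.

```python
# Maximum amplitudes per microphone from the example above (in dB),
# and the device nearest each microphone (an assumed mapping).
max_amplitude = {20: 10.0, 22: 20.0, 24: 12.0}
device_for_mic = {20: "sink", 22: "gas stove", 24: "microwave"}

def direction_probabilities(max_amplitude, device_for_mic):
    """Probability that the user faces each device, proportional to the
    squared maximum amplitude of the microphone nearest that device."""
    squared = {mic: a * a for mic, a in max_amplitude.items()}
    total = sum(squared.values())
    return {device_for_mic[mic]: s / total for mic, s in squared.items()}

probs = direction_probabilities(max_amplitude, device_for_mic)
# Microphone 22 (20 dB, near the gas stove) dominates: 400 / 644 ≈ 0.62.
```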

[0021]

The inference means (60) combines the "points" of the "combinations of device name and operation content" calculated by the voice recognition means (40) (step 3) with the "probabilities" of the "device names" calculated by the distribution analysis means (50) (step 4), using fuzzy logic or the like (step 5 in FIG. 3). Specifically, for each common device name, the points are multiplied by the probability to give the scores shown in Table 5.

[0022]

[Table 5]

[0023]

The inference means (60) derives the inference result for "device name" and "operation content" from the ratio of the points. Specifically, as shown in Table 6, the probability of each inference result is calculated with the sum of all points as the denominator and each point as the numerator.

[0024]

[Table 6]

[0025]

Among the inference results calculated by the inference means (60), the combination of device name and operation content with the highest probability is compared with a predetermined threshold. If the probability is higher than the threshold, the device name and operation content are sent to the device control means (70); if it is lower, nothing is sent. In that case, the system may indicate by sound or light that the operation command was not recognized.
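The combination, normalization, and threshold check of paragraphs [0021] to [0025] can be sketched as follows. The point values, probabilities, and the threshold of 0.5 are hypothetical, and the simple multiplication stands in for the "fuzzy logic or the like" named above.

```python
def infer(points, direction_probs, threshold=0.5):
    """Combine recognition points with direction probabilities (Table 5),
    normalize by the total (Table 6), and return the best (device,
    operation) pair only if its probability exceeds the threshold."""
    combined = {
        (dev, op): p * direction_probs.get(dev, 0.0)
        for (dev, op), p in points.items()
    }
    total = sum(combined.values())
    if total == 0.0:
        return None  # no candidate at all
    result = {k: v / total for k, v in combined.items()}
    best = max(result, key=result.get)
    # Send to the device control means only if confident enough.
    return best if result[best] > threshold else None

# Hypothetical inputs carried over from the earlier steps.
points = {("gas stove", "strong"): 0.56,
          ("gas stove", "off"): 0.08,
          ("sink", "strong"): 0.07}
direction_probs = {"gas stove": 0.62, "sink": 0.16, "microwave": 0.22}
command = infer(points, direction_probs)  # ("gas stove", "strong")
```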

[0026]

The device control means (70) receives the operation command, containing the device name and operation content, sent from the inference means (60), and transmits an operation signal corresponding to the operation content to the device to be controlled (step 6 in FIG. 3). In the above example, the operation content "strong" (increase the heat) is transmitted to the "gas stove".

[0027]

The device to be controlled (in the above case, the gas stove (82)) executes the operation based on the operation content in the received operation signal.

[0028]

When the user (100) faces the sink (80) or the microwave oven (84) and utters an operation command, the device is selected and the operation content is determined in the same manner as above. When the user (100) faces a direction other than toward the devices (80), (82) and (84), for example, the direction opposite to the arrow in FIG. 2, the voice data of one of the microphones (26) has the maximum amplitude, so the probabilities calculated by the distribution analysis means (50) that the user is facing the devices (80), (82) and (84) become low. Therefore, even if the operation command is identified with a high identification rate by the voice recognition means (40), the inference result calculated by the inference means (60) is low, and the operation command is not sent to the device control means (70).

[0029]

According to the above embodiment, the user need only speak the device name and operation content toward the device to be controlled, without worrying about the positions of the microphones (20), (22), (24) and (26); the device is selected from both the direction of utterance and the spoken device name, so the recognition rate of the device can be improved.

[0030]

In the above embodiment, the user utters both the device name and the operation content. However, since the device name can be determined by the distribution analysis means (50), the user may instead utter only the operation content toward the device to be controlled: the device is selected from the direction of utterance, only the operation content is recognized by voice, and the device is operated accordingly.

[0031]

In this case, the recognition rate of the operation content by the voice recognition means (40) (see the right side of Table 2 above) and the probability of the device specified from the utterance direction by the distribution analysis means (50) (see Table 4 above) are multiplied by the inference means (60); the probability of each inference result for "device name" and "operation content" is calculated from the ratio of the points; the combination of device name and operation content with the highest probability is compared with the predetermined threshold; and, in the same manner as above, the device name and operation content are sent to the device control means (70) to control the device.

[0032]

According to this embodiment, the command the user utters consists only of the operation content, so the operation command is simplified and the usability of the voice control system (10) is improved.

[0033]

The description of the above embodiments is intended to illustrate the present invention and should not be construed as limiting the invention described in the claims or reducing their scope. The configuration of each part of the present invention is not limited to the above embodiments, and various modifications are possible within the technical scope described in the claims.

[0034]

As shown in FIG. 5, the microphones (20), (22) and (24) may each be attached to the devices (80), (82) and (84) to be controlled. The type of device to be controlled is not limited to the above embodiments, and the space to which the voice control system of the present invention is applied is not limited to a kitchen.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of the voice control system of the present invention.

FIG. 2 is an explanatory diagram in which the present invention is applied to the control of kitchen appliances.

FIG. 3 is a flowchart showing the flow of the present invention.

FIG. 4 is a waveform diagram showing an example of collected voice data.

FIG. 5 is an explanatory diagram showing a different embodiment of the present invention.

FIG. 6 is a block diagram of a conventional voice control system.

EXPLANATION OF SYMBOLS

(10) Voice control system
(20)(22)(24) Microphones
(30) Sound collecting means
(40) Voice recognition means
(50) Distribution analysis means
(60) Inference means
(70) Device control means

Continued from the front page — (51) Int.Cl.7 identification symbols / FI theme codes (reference): H04R 1/40 320; G10L 3/00 511, 551N

Claims (6)

[Claims]

[Claim 1] A voice control system for a plurality of devices, comprising: a plurality of devices (80), (82) and (84) to be controlled; a plurality of microphones (20), (22) and (24) arranged at plural locations in a space to detect a user's voice; a sound collecting means (30) that collects the voice data detected by the microphones (20), (22) and (24); a voice recognition means (40) that analyzes the content of the voice data input to the sound collecting means (30); a distribution analysis means (50) that detects the direction of the user's utterance from the magnitude of the voice data input to the sound collecting means (30); an inference means (60) that determines the device (82) to be controlled and the operation content, based on the content of the voice data analyzed by the voice recognition means (40) and the utterance direction of the user analyzed by the distribution analysis means (50); and a device control means (70) that issues a control signal to the device (82) to be controlled, based on the device (82) and the operation content determined by the inference means (60).

[Claim 2] The voice control system for a plurality of devices according to claim 1, wherein the distribution analysis means (50) detects the direction of the user's utterance by comparing the amplitudes of the voice data input from the microphones (20), (22) and (24) to the sound collecting means (30).

[Claim 3] The voice control system for a plurality of devices according to claim 1 or 2, wherein the voice recognition means (40) analyzes the content of the voice data with the largest amplitude among the voice data input from the microphones (20), (22) and (24) to the sound collecting means (30).

[Claim 4] The voice control system for a plurality of devices according to any one of claims 1 to 3, wherein the voice uttered by the user contains the name of the device to be controlled and the operation content.

[Claim 5] The voice control system for a plurality of devices according to any one of claims 1 to 3, wherein the voice uttered by the user contains only the operation content for the device to be controlled, the operation content being analyzed by the voice recognition means (40) and the device (82) to be controlled being specified by the distribution analysis means (50).

[Claim 6] The voice control system for a plurality of devices according to any one of claims 1 to 5, wherein the devices (80), (82) and (84) to be controlled are kitchen appliances.
JP2000284613A 2000-09-20 2000-09-20 Voice control system for plural pieces of equipment Withdrawn JP2002091491A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2000284613A JP2002091491A (en) 2000-09-20 2000-09-20 Voice control system for plural pieces of equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2000284613A JP2002091491A (en) 2000-09-20 2000-09-20 Voice control system for plural pieces of equipment

Publications (1)

Publication Number Publication Date
JP2002091491A true JP2002091491A (en) 2002-03-27

Family

ID=18768796

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2000284613A Withdrawn JP2002091491A (en) 2000-09-20 2000-09-20 Voice control system for plural pieces of equipment

Country Status (1)

Country Link
JP (1) JP2002091491A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002311990A (en) * 2000-12-19 2002-10-25 Hewlett Packard Co <Hp> Activation method and system of voice-controlled apparatus
DE10208468A1 (en) * 2002-02-27 2003-09-04 Bsh Bosch Siemens Hausgeraete Electric domestic appliance, especially extractor hood with voice recognition unit for controlling functions of appliance, comprises a motion detector, by which the position of the operator can be identified
JP2004053581A (en) * 2002-04-17 2004-02-19 Daimler Chrysler Ag Method of detecting direction of line of sight by microphone
GB2394589A (en) * 2002-10-25 2004-04-28 Motorola Inc Speech recognition device
JP2009210956A (en) * 2008-03-06 2009-09-17 National Institute Of Advanced Industrial & Technology Operation method and operation device for the same, and program
GB2485145A (en) * 2010-10-29 2012-05-09 Displaylink Uk Ltd Audio command routing method for voice-controlled applications in multi-display systems
JP2015050766A (en) * 2013-09-03 2015-03-16 パナソニック インテレクチュアル プロパティ コーポレーション オブアメリカPanasonic Intellectual Property Corporation of America Equipment control method, audio equipment control system and cooking equipment
EP2851621A1 (en) 2013-09-03 2015-03-25 Panasonic Intellectual Property Corporation of America Speech-based appliance control method, speech-based appliance control system, and cooking appliance using such method
CN105895100A (en) * 2016-06-29 2016-08-24 广东美的厨房电器制造有限公司 Kitchen voice control device, system and method
JP2017509917A (en) * 2014-02-19 2017-04-06 ノキア テクノロジーズ オサケユイチア Determination of motion commands based at least in part on spatial acoustic characteristics
GB2564237A (en) * 2017-05-23 2019-01-09 Lenovo Singapore Pte Ltd Method of associating user input with a device
US11373648B2 (en) 2018-09-25 2022-06-28 Fujifilm Business Innovation Corp. Control device, control system, and non-transitory computer readable medium
FR3130048A1 (en) 2021-12-07 2023-06-09 Seb S.A. System for controlling a plurality of domestic appliances

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002311990A (en) * 2000-12-19 2002-10-25 Hewlett Packard Co <Hp> Activation method and system of voice-controlled apparatus
DE10208468A1 (en) * 2002-02-27 2003-09-04 Bsh Bosch Siemens Hausgeraete Electric domestic appliance, especially extractor hood with voice recognition unit for controlling functions of appliance, comprises a motion detector, by which the position of the operator can be identified
JP2004053581A (en) * 2002-04-17 2004-02-19 Daimler Chrysler Ag Method of detecting direction of line of sight by microphone
GB2394589A (en) * 2002-10-25 2004-04-28 Motorola Inc Speech recognition device
GB2394589B (en) * 2002-10-25 2005-05-25 Motorola Inc Speech recognition device and method
JP2009210956A (en) * 2008-03-06 2009-09-17 National Institute Of Advanced Industrial & Technology Operation method and operation device for the same, and program
GB2485145A (en) * 2010-10-29 2012-05-09 Displaylink Uk Ltd Audio command routing method for voice-controlled applications in multi-display systems
EP2851621A1 (en) 2013-09-03 2015-03-25 Panasonic Intellectual Property Corporation of America Speech-based appliance control method, speech-based appliance control system, and cooking appliance using such method
JP2015050766A (en) * 2013-09-03 Panasonic Intellectual Property Corporation of America Equipment control method, audio equipment control system and cooking equipment
US9316400B2 (en) 2013-09-03 2016-04-19 Panasonic Intellectual Property Corporation of America Appliance control method, speech-based appliance control system, and cooking appliance
JP2017509917A (en) * 2014-02-19 Nokia Technologies Oy Determination of motion commands based at least in part on spatial acoustic characteristics
CN105895100A (en) * 2016-06-29 2016-08-24 广东美的厨房电器制造有限公司 Kitchen voice control device, system and method
GB2564237A (en) * 2017-05-23 2019-01-09 Lenovo Singapore Pte Ltd Method of associating user input with a device
US10573171B2 (en) 2017-05-23 2020-02-25 Lenovo (Singapore) Pte. Ltd. Method of associating user input with a device
US11373648B2 (en) 2018-09-25 2022-06-28 Fujifilm Business Innovation Corp. Control device, control system, and non-transitory computer readable medium
FR3130048A1 (en) 2021-12-07 2023-06-09 Seb S.A. System for controlling a plurality of domestic appliances
EP4194972A1 (en) 2021-12-07 2023-06-14 Seb S.A. System for controlling a plurality of household appliances

Similar Documents

Publication Publication Date Title
US11922095B2 (en) Device selection for providing a response
US9510090B2 (en) Device and method for capturing and processing voice
JP4939935B2 (en) Binaural hearing aid system with matched acoustic processing
JP2002091491A (en) Voice control system for plural pieces of equipment
US11437021B2 (en) Processing audio signals
JP4729927B2 (en) Voice detection device, automatic imaging device, and voice detection method
US6243322B1 (en) Method for estimating the distance of an acoustic signal
JP3838029B2 (en) Device control method using speech recognition and device control system using speech recognition
CN109920419B (en) Voice control method and device, electronic equipment and computer readable medium
US9349384B2 (en) Method and system for object-dependent adjustment of levels of audio objects
CN106992015A (en) Voice-activation system
EP2504745B1 (en) Communication interface apparatus and method for multi-user
JP2008205896A (en) Sound emitting and picking up device
US12039970B1 (en) System and method for source authentication in voice-controlled automation
JP2005227512A (en) Sound signal processing method and its apparatus, voice recognition device, and program
US7177806B2 (en) Sound signal recognition system and sound signal recognition method, and dialog control system and dialog control method using sound signal recognition system
CN114175145A (en) Multimodal intelligent audio device system attention expression
CN113516975A (en) Intelligent household voice-operated switch system and control method
WO2020230460A1 (en) Information processing device, information processing system, information processing method, and program
JP2020030271A (en) Conversation voice level notification system and conversation voice level notification method
WO2023125534A1 (en) Input detection apparatus, system, and related device thereof
JP2020166148A (en) Sound collection control device, sound collection control program and conference support system
JPS63118197A (en) Voice detector
JPH05289690A (en) Voice recognition controller
JPH04184497A (en) Voice recognition device

Legal Events

Date Code Title Description
A300 Application deemed to be withdrawn because no request for examination was validly filed
Free format text: JAPANESE INTERMEDIATE CODE: A300
Effective date: 20071204