JPH0286397A - Microphone array - Google Patents
- Publication number
- JPH0286397A (application JP23806788A)
- Authority
- JP
- Japan
- Prior art keywords
- microphone
- microphones
- sound
- wavelength
- noise
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Circuit For Audible Band Transducer (AREA)
- Obtaining Desirable Characteristics In Audible-Bandwidth Transducers (AREA)
Description
DETAILED DESCRIPTION OF THE INVENTION
[Field of Industrial Application]
The present invention relates to a microphone array in which a plurality of microphones is specially arranged so as to be well suited to a device that performs noise suppression.
[Prior Art]
Microphone arrays and methods that suppress noise through the arrangement of a plurality of microphones have been proposed. One such array places the microphones at intervals of one half the wavelength of the sound at the center frequency of the target input signal (the target signal) and suppresses noise by sharpening the directivity toward that signal. Another is the method proposed in Kaneda: ASSP-34, pp. 1391-1400, '86, which suppresses noise by forming a directional null (blind spot) toward the direction of noise arrival: the direction of arrival of the noise is detected by computing the correlation between the input signals of the microphones, and a directional null is then formed in that direction.
[Problems to be Solved by the Invention]
However, the conventional microphone array described above is effective when the target input signal occupies a relatively narrow band, but suffers from the problem that a sufficient effect cannot be obtained for very wide-band signals such as speech. The conventional method that suppresses noise by forming a directional null toward the direction of noise arrival, on the other hand, has the problem that it cannot be applied directly to the microphone arrangements proposed so far.
The present invention was made to solve these problems, and its object is to provide a microphone array that brings out the full performance (the S/N improvement) of a device that suppresses noise by forming a directional null toward the direction of noise arrival.
[Means for Solving the Problems]
To achieve the above object, the microphone array of the present invention is configured as follows. In a microphone array in which a plurality of microphones is arranged along a line, in a plane, or in three dimensions, a group of mutually parallel planes each containing at least one of the microphones is assumed; the microphones are then arranged so that the widest interval between adjacent planes in that group is half, or approximately half, the wavelength of the sound at the upper limit frequency of the input frequency band.
[Operation]
The present invention rests on the observation that the performance (the S/N improvement) of a device that suppresses noise by forming a directional null toward the direction of noise arrival is degraded by two factors: high-frequency degradation due to spatial aliasing, and low-frequency degradation that occurs when the microphone spacing is narrow. It was found that the combined degradation is smallest when the microphone spacing along the direction of noise arrival is half the wavelength of the sound at the upper limit frequency of the input band. When a microphone array is built from a plurality of microphone elements, arranging the microphones so that at least some of them satisfy this relation, or approximately this relation, prevents spatial aliasing and minimizes the low-frequency degradation, thereby maximizing the S/N improvement of the device.
[Embodiments]
Embodiments of the present invention will now be described in detail with reference to the drawings, beginning with the basic configuration and operation of the invention.
Fig. 1 is a layout diagram of the basic microphone arrangement of the first embodiment of the present invention. Two microphones 1 and 2 are placed at a spacing d equal to half the wavelength of the sound at the upper limit frequency of the input frequency band (hereafter, the band upper limit frequency). Taking the band to be 0-4 kHz and writing λ4k for the wavelength of sound at the band upper limit frequency, microphones 1 and 2 are placed at the spacing d = λ4k/2.
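As a numeric sketch of this spacing rule (not part of the patent; it assumes a speed of sound of about 343 m/s in air):

```python
# Hedged sketch: spacing for a 0-4 kHz input band, assuming c = 343 m/s.
C = 343.0          # assumed speed of sound in air [m/s]
F_MAX = 4000.0     # band upper limit frequency [Hz]

wavelength_4k = C / F_MAX    # λ4k: wavelength at the band upper limit
d = wavelength_4k / 2.0      # spacing d = λ4k / 2

print(f"lambda_4k = {wavelength_4k * 100:.2f} cm, d = {d * 100:.2f} cm")
```

Under these assumptions the spacing d comes out to roughly 4.3 cm.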
Fig. 2 shows the relation between the microphone spacing and the S/N improvement in the first embodiment. It is a measured example of the improvement PF in the signal-to-noise ratio S/N when, in the first embodiment, the target signal S arrives from a direction at angle θS to the line joining microphones 1 and 2, the noise N arrives from a direction at angle θN to that line, and the spacing d is varied. The horizontal axis is the ratio of the spacing d to the wavelength λ4k, and each of the signals S and N is 0-4 kHz white noise. Curve A, plotted with circles, shows d/λ4k versus PF [dB] for θS = 30° and θN = 160°; likewise, curve B, plotted with squares, is for θS = 40° and θN = 150°, and curve C, plotted with triangles, for θS = 140° and θN = 60°. As the figure makes clear, the S/N improvement PF takes its best value when the microphone spacing is d = λ4k/2, that is, half the wavelength of the sound at the band upper limit frequency. The variation of PF with d is thought to arise because PF is governed by the low-frequency S/N improvement for d < λ4k/2 and by the high-frequency S/N improvement for d > λ4k/2. The respective causes are considered to be as follows.
(1) When d is small (d < λ4k/2), the sensitivity in the direction of the target signal S falls at low frequencies, so the S/N improvement PF decreases.
(2) When d is large (d > λ4k/2), spatial aliasing occurs at high frequencies, so uncontrollable directional nulls form in directions other than the noise arrival direction, and the S/N improvement PF decreases.
The measurement results of Fig. 2 show that the influence of (2) is greater than that of (1). In the above measurement, if the difference in sound-wave arrival times as the noise arrival direction θN changes is taken into account, the apparent microphone spacing r (the path difference along the noise arrival direction) is r = d·cos θN. Here d is the maximum value of r; the condition for the spatial aliasing of (2) not to occur is that d, as the maximum of r, be no larger than λ4k/2, while the sensitivity loss of (1) is avoided by making d as large as that condition allows. For two microphones, then, the optimal spacing d is 1/2 the wavelength of the sound at the band upper limit frequency.
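The apparent-spacing relation above can be sketched as follows (illustrative only; the speed of sound and the sampled angles are assumptions, not values from the patent):

```python
import math

C, F_MAX = 343.0, 4000.0     # assumed speed of sound [m/s], band limit [Hz]
LAM = C / F_MAX              # λ4k
d = LAM / 2.0                # two-microphone spacing at the optimum

# Apparent spacing r = d * cos(θN) for several noise arrival angles θN;
# since |cos| <= 1, r never exceeds d = λ4k/2, so no aliasing occurs.
for theta_deg in (0.0, 30.0, 60.0, 160.0):
    r = d * abs(math.cos(math.radians(theta_deg)))
    print(f"thetaN = {theta_deg:5.1f} deg, r = {r * 100:.2f} cm")
```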
Next, the case where the number of microphones is increased and the microphones are placed along a line is described. Fig. 3 is a layout diagram of the linearly arranged microphone array of the second embodiment of the present invention. Three microphones 1, 2, and 3 are placed on a line, with microphone 3 outside the pair 1 and 2, and the spacing d1 between microphones 1 and 2 and the spacing d2 between microphones 2 and 3 are each set to half the wavelength λ4k of the sound at the band upper limit frequency (taken as 4 kHz), that is, to λ4k/2.
Fig. 4 shows the relation between the microphone spacing and the S/N improvement in the second embodiment, namely the S/N improvement when the spacing d2 is varied. In Fig. 3 the target signal S arrives from the 90° direction relative to the microphone array, noise N1 from the 30° direction, and noise N2 from the 180° direction. As this figure makes clear, the S/N improvement is best when microphone 3 is placed outside microphones 1 and 2 and d2 = λ4k/2. If microphone 3 is instead placed between microphones 1 and 2, no spatial aliasing occurs, but the microphones come very close together and the low-frequency S/N improvement deteriorates. The same holds as the number of microphones increases. That is, when microphones are arranged on a line, it is optimal to set the spacing between each pair of adjacent microphones to 1/2 the wavelength of the sound at the band upper limit frequency.
Next, the case where microphones are arranged two-dimensionally in a plane is described. Figs. 5(a) and 5(b) are layout diagrams of the planar microphone arrays of the third embodiment of the present invention. In a two-dimensional arrangement, let dmax be the maximum, over all assumed directions of sound arrival, of the apparent spacing between two adjacent microphones; the condition for spatial aliasing not to occur is then dmax < λ4k/2.
Fig. 5(a) shows a two-dimensional arrangement of three microphones 1, 2, and 3 placed at the vertices of a triangle. In this case the maximum apparent spacing dmax between two adjacent microphones is the distance between microphone 1 and the line joining microphones 2 and 3. The maximum apparent spacing dmax therefore does not necessarily coincide with the actual spacing d between the microphones.
If, as in the conventional approach, each actual microphone spacing d is set to λ4k/2, spatial aliasing does not occur, but the array length becomes smaller than when dmax is set to λ4k/2. For low frequencies, however, it is desirable to place the microphones as far apart as possible. Therefore, to place the microphones as far apart as possible in a two-dimensional arrangement without spatial aliasing, it is optimal to set the maximum apparent spacing dmax to 1/2 the wavelength λ4k of the sound at the band upper limit frequency; the S/N improvement is then greatest. Fig. 5(b) shows a two-dimensional arrangement of four microphones 1, 2, 3, and 4: microphones 1, 2, and 3 are placed at the vertices of a triangle inscribed in a circle, and microphone 4 at the center of that circle. In this case the maximum apparent spacing dmax is the spacing d between microphone 4 and each of microphones 1, 2, and 3, and the four microphones are arranged so that this spacing d is 1/2 the wavelength λ4k of the sound at the band upper limit frequency.
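The geometry of Fig. 5(b) can be sketched numerically (an illustration under assumed values, not the patent's own figures): constraining the apparent spacing dmax, i.e. the circle radius, to λ4k/2 lets the rim microphones sit farther apart than λ4k/2, giving a larger aperture for the low frequencies.

```python
import math

C, F_MAX = 343.0, 4000.0        # assumed values
LAM = C / F_MAX                 # λ4k

radius = LAM / 2.0              # dmax = center-to-rim spacing = λ4k/2
side = radius * math.sqrt(3.0)  # chord between rim microphones of the
                                # inscribed equilateral triangle

print(f"radius (dmax) = {radius * 100:.2f} cm")
print(f"rim-to-rim spacing = {side * 100:.2f} cm")
```

The rim-to-rim spacing, about 7.4 cm here, exceeds λ4k/2 ≈ 4.3 cm, which is exactly the aperture gain this embodiment aims for.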
Fig. 6 shows the relation between the microphone spacing and the S/N improvement in the third embodiment of Fig. 5(b). In Fig. 5(b), the target signal S arrives from the 90° direction (the direction from microphone 1 toward microphone 4), noise N1 from the 30° direction, noise N2 from the -30° direction, and noise N3 from the -90° direction. When the maximum apparent spacing dmax (= d) is varied, the figure makes clear that the S/N improvement PF [dB] is best at d/λ4k = 0.5 (= 1/2). The same holds as the number of microphones increases, and likewise for three-dimensional arrangements.
Representative symmetric examples of the two-dimensional (planar) and three-dimensional (solid) arrangements among the embodiments of the present invention are given below. Figs. 7(a) and 7(b) are layout diagrams of a two-dimensional embodiment in which three microphones 1, 2, and 3 are placed at the vertices of an equilateral triangle. When noise N arrives from direction D as shown in (a), it is incident first on microphone 1 and then on microphones 2 and 3; the path length from its incidence on microphone 1 to its incidence on microphone 2 or 3 is d in each case, and since the noise reaches microphones 2 and 3 simultaneously, the path difference between them is zero. When noise N arrives from direction E as shown in (b), it is incident first on microphone 1, then on microphone 2, and finally on microphone 3, so the apparent spacings of the microphones with respect to the noise arrival direction are d12 and d23 as shown in the figure. These apparent spacings change with the direction of noise arrival. Among the apparent spacings that vary in this way, the apparent spacing between adjacent microphones becomes largest, in this embodiment, when the noise N arrives from direction D shown in (a). Accordingly, if the maximum apparent spacing d shown in (a) is set to 1/2 the wavelength of the sound at the upper limit frequency of the frequency band used, spatial aliasing does not occur.
Figs. 8(a) and 8(b) are two-dimensional embodiments with four microphones. In (a), microphones 1, 2, and 3 are placed at the vertices of an equilateral triangle with microphone 4 at its center; in (b), microphones 1, 2, 3, and 4 are placed at the vertices of a square. In each figure dmax denotes the maximum apparent spacing between adjacent microphones described above, and this spacing is set to 1/2 the wavelength at the upper limit frequency of the frequency band used.
Similarly, Fig. 9 is a two-dimensional embodiment with five microphones, Figs. 10(a) and 10(b) are two-dimensional embodiments with six microphones, and Fig. 11 is a two-dimensional embodiment with eight microphones; microphones 1-5, 1-6, and 1-8 are respectively placed at the vertices of a regular polygon, or at the vertices of a regular polygon with one microphone at its center. In each figure dmax is the maximum apparent microphone spacing, and it is set to 1/2 the wavelength of the sound at the upper limit frequency of the frequency band used.
Fig. 12 is an embodiment of the present invention with four microphones in a three-dimensional arrangement: microphones 1, 2, 3, and 4 are placed at the vertices of a regular triangular pyramid. In this case the maximum apparent spacing dmax is the distance between microphone 1 and the plane containing microphones 2, 3, and 4, and this spacing dmax is set to 1/2 the wavelength of the sound at the band upper limit frequency.
Fig. 13 is an embodiment with five microphones in a three-dimensional arrangement: microphones 1, 2, 3, and 4 are placed at the vertices of a regular triangular pyramid with microphone 5 at its center. In this case the maximum apparent spacing dmax is the distance between microphone 5 and each of microphones 1-4, and the microphones are arranged so that dmax is 1/2 the wavelength of the sound at the band upper limit frequency.
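For the tetrahedral layout of Fig. 12, the microphone-to-opposite-plane distance fixes the edge length. The sketch below (assumed numeric values, not from the patent) uses the standard height of a regular tetrahedron above a face, h = a·√(2/3):

```python
import math

C, F_MAX = 343.0, 4000.0        # assumed values
LAM = C / F_MAX                 # λ4k

h = LAM / 2.0                   # dmax: microphone-to-opposite-plane distance
a = h / math.sqrt(2.0 / 3.0)    # required edge length of the tetrahedron

print(f"dmax = {h * 100:.2f} cm, edge length a = {a * 100:.2f} cm")
```

The edge length again exceeds dmax, so the microphones sit farther apart than the plane spacing that governs aliasing.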
The linear, planar, and solid microphone arrangements of the above embodiments can be stated in general terms as follows. When a sound wave from some direction is incident on the microphones in turn, let the difference in path length between its incidence on one microphone and its incidence on the next be the apparent spacing of those two microphones with respect to the arrival direction. The arrangements then set the maximum of these apparent spacings, which vary with the direction of noise arrival, to half the wavelength of the sound at the upper limit frequency of the frequency band used. More precisely, when a group of mutually parallel planes each containing one or more microphones is assumed, the microphones are arranged so that the widest interval between adjacent planes in the group is half the wavelength of the sound at the upper limit frequency of the input frequency band.
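The general statement above can be turned into a small check (an illustrative sketch, not the patent's procedure; the direction sampling and the example coordinates are assumptions): project the microphone positions onto each candidate arrival direction and take the largest gap between adjacent projections.

```python
import math

def max_apparent_spacing(mics, n_dirs=3600):
    """Maximum, over in-plane arrival directions, of the largest gap
    between adjacent projections of the microphone positions."""
    dmax = 0.0
    for k in range(n_dirs):
        phi = 2.0 * math.pi * k / n_dirs
        ux, uy = math.cos(phi), math.sin(phi)            # arrival direction
        proj = sorted(x * ux + y * uy for x, y in mics)  # plane positions
        gap = max(b - a for a, b in zip(proj, proj[1:]))
        dmax = max(dmax, gap)
    return dmax

# Equilateral triangle of side s (cf. Fig. 7): the widest adjacent gap is
# the triangle height s*sqrt(3)/2, reached when the wave arrives along a
# median, as in Fig. 7(a).
s = 0.05
tri = [(0.0, 0.0), (s, 0.0), (s / 2.0, s * math.sqrt(3.0) / 2.0)]
print(max_apparent_spacing(tri))   # close to s * sqrt(3) / 2
```

Setting this maximum to λ4k/2 reproduces, for the planar case, the plane-group condition stated in the claim.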
Although the above embodiments mainly treat cases in which a plurality of microphones is arranged symmetrically, it goes without saying that the same effect can be obtained with asymmetric arrangements. Also, while it is optimal in the above arrangements to set the maximum apparent spacing dmax to 1/2 the wavelength of the sound at the band upper limit frequency, the spacing-versus-S/N-improvement plots of Figs. 4 and 6 show that a considerable effect is also obtained at spacings near 1/2, so dmax may be set to a spacing other than 1/2 as long as the S/N improvement remains in the desired or allowable range. The present invention can thus be applied in various ways in accordance with its gist and can take various embodiments.
[Effects of the Invention]
As the above description makes clear, the microphone array of the present invention has the advantage of preventing spatial aliasing, the factor that most affects the S/N improvement, while making the low-frequency S/N improvement as large as possible.
[Brief Description of the Drawings]
Fig. 1 is a layout diagram of the basic microphone array of the first embodiment of the present invention; Fig. 2 shows the relation between microphone spacing and S/N improvement in the first embodiment; Fig. 3 is a layout diagram of the linearly arranged microphone array of the second embodiment; Fig. 4 shows the relation between microphone spacing and S/N improvement in the second embodiment; Figs. 5(a) and 5(b) are layout diagrams of the planar microphone array of the third embodiment; Fig. 6 shows the relation between microphone spacing and S/N improvement in the third embodiment; Figs. 7(a) and (b), 8(a) and (b), 9, 10(a) and (b), and 11 show representative two-dimensional arrangements among the embodiments of the present invention; and Figs. 12 and 13 show representative three-dimensional arrangements among the embodiments.
1, 2, 3, 4 ... microphones.
Claims (1)
(1) A microphone array in which a plurality of microphones is arranged along a line, in a plane, or in three dimensions, characterized in that, when a group of mutually parallel planes each containing one or more of the microphones is assumed, the microphones are arranged so that the widest interval between adjacent planes in that group is half, or approximately half, the wavelength of the sound at the upper limit frequency of the input frequency band.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP23806788A JPH0286397A (en) | 1988-09-22 | 1988-09-22 | Microphone array |
Publications (1)
Publication Number | Publication Date |
---|---|
JPH0286397A true JPH0286397A (en) | 1990-03-27 |
Family
ID=17024664
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP23806788A Pending JPH0286397A (en) | 1988-09-22 | 1988-09-22 | Microphone array |
Country Status (1)
Country | Link |
---|---|
JP (1) | JPH0286397A (en) |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9922642B2 (en) | 2013-03-15 | 2018-03-20 | Apple Inc. | Training an at least partial voice command system |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9953088B2 (en) | 2012-05-14 | 2018-04-24 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US9959870B2 (en) | 2008-12-11 | 2018-05-01 | Apple Inc. | Speech recognition involving a mobile device |
US9966068B2 (en) | 2013-06-08 | 2018-05-08 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US9966065B2 (en) | 2014-05-30 | 2018-05-08 | Apple Inc. | Multi-command single utterance input method |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10134385B2 (en) | 2012-03-02 | 2018-11-20 | Apple Inc. | Systems and methods for name pronunciation |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US10185542B2 (en) | 2013-06-09 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US10199051B2 (en) | 2013-02-07 | 2019-02-05 | Apple Inc. | Voice trigger for a digital assistant |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US10283110B2 (en) | 2009-07-02 | 2019-05-07 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
US10289433B2 (en) | 2014-05-30 | 2019-05-14 | Apple Inc. | Domain specific language for encoding assistant dialog |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US10568032B2 (en) | 2007-04-03 | 2020-02-18 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US10592095B2 (en) | 2014-05-23 | 2020-03-17 | Apple Inc. | Instantaneous speaking of content on touch devices |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10762293B2 (en) | 2010-12-22 | 2020-09-01 | Apple Inc. | Using parts-of-speech tagging and named entity recognition for spelling correction |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US10791216B2 (en) | 2013-08-06 | 2020-09-29 | Apple Inc. | Auto-activating smart responses based on activities from remote devices |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
JPWO2022091591A1 (en) * | 2020-10-30 | 2022-05-05 | ||
WO2022102311A1 (en) * | 2020-11-10 | 2022-05-19 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | Sound pickup device |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
Legal Events

- 1988-09-22: Application JP23806788A filed in Japan; patent JPH0286397A, status: Pending (active)
Cited By (164)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9646614B2 (en) | 2000-03-16 | 2017-05-09 | Apple Inc. | Fast, language-independent method for user authentication by voice |
EP1206161A1 (en) * | 2000-11-10 | 2002-05-15 | Sony International (Europe) GmbH | Microphone array with self-adjusting directivity for handsets and hands free kits |
WO2003015467A1 (en) * | 2001-08-08 | 2003-02-20 | Apple Computer, Inc. | Spacing for microphone elements |
US7349849B2 (en) | 2001-08-08 | 2008-03-25 | Apple, Inc. | Spacing for microphone elements |
JP2006060525A (en) * | 2004-08-20 | 2006-03-02 | Ryuichiro Yukawa | Sound collection method for reproducing 3-dimensional sound image |
US10318871B2 (en) | 2005-09-08 | 2019-06-11 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US8855286B2 (en) | 2005-10-27 | 2014-10-07 | Yamaha Corporation | Audio conference device |
US8565464B2 (en) | 2005-10-27 | 2013-10-22 | Yamaha Corporation | Audio conference apparatus |
JP2007129485A (en) * | 2005-11-02 | 2007-05-24 | Yamaha Corp | Sound pickup device |
US8238584B2 (en) | 2005-11-02 | 2012-08-07 | Yamaha Corporation | Voice signal transmitting/receiving apparatus |
WO2007052645A1 (en) * | 2005-11-02 | 2007-05-10 | Yamaha Corporation | Sound collecting device |
US8942986B2 (en) | 2006-09-08 | 2015-01-27 | Apple Inc. | Determining user intent based on ontologies of domains |
US9117447B2 (en) | 2006-09-08 | 2015-08-25 | Apple Inc. | Using event alert text as input to an automated assistant |
US8930191B2 (en) | 2006-09-08 | 2015-01-06 | Apple Inc. | Paraphrasing of user requests and results by automated digital assistant |
JP2008092512A (en) * | 2006-10-05 | 2008-04-17 | Casio Hitachi Mobile Communications Co Ltd | Voice input unit |
US10568032B2 (en) | 2007-04-03 | 2020-02-18 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US10381016B2 (en) | 2008-01-03 | 2019-08-13 | Apple Inc. | Methods and apparatus for altering audio output signals |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US9626955B2 (en) | 2008-04-05 | 2017-04-18 | Apple Inc. | Intelligent text-to-speech conversion |
US9865248B2 (en) | 2008-04-05 | 2018-01-09 | Apple Inc. | Intelligent text-to-speech conversion |
US9535906B2 (en) | 2008-07-31 | 2017-01-03 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
US10108612B2 (en) | 2008-07-31 | 2018-10-23 | Apple Inc. | Mobile device having human language translation capability with positional feedback |
JP2010056763A (en) * | 2008-08-27 | 2010-03-11 | Murata Machinery Ltd | Voice recognition apparatus |
US9959870B2 (en) | 2008-12-11 | 2018-05-01 | Apple Inc. | Speech recognition involving a mobile device |
US9858925B2 (en) | 2009-06-05 | 2018-01-02 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US10795541B2 (en) | 2009-06-05 | 2020-10-06 | Apple Inc. | Intelligent organization of tasks items |
US10475446B2 (en) | 2009-06-05 | 2019-11-12 | Apple Inc. | Using context information to facilitate processing of commands in a virtual assistant |
US11080012B2 (en) | 2009-06-05 | 2021-08-03 | Apple Inc. | Interface for a virtual digital assistant |
US10283110B2 (en) | 2009-07-02 | 2019-05-07 | Apple Inc. | Methods and apparatuses for automatic speech recognition |
JP2011049974A (en) * | 2009-08-28 | 2011-03-10 | Ihi Corp | Receiving array apparatus |
US10679605B2 (en) | 2010-01-18 | 2020-06-09 | Apple Inc. | Hands-free list-reading by intelligent automated assistant |
US8903716B2 (en) | 2010-01-18 | 2014-12-02 | Apple Inc. | Personalized vocabulary for digital assistant |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US9548050B2 (en) | 2010-01-18 | 2017-01-17 | Apple Inc. | Intelligent automated assistant |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US10496753B2 (en) | 2010-01-18 | 2019-12-03 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10553209B2 (en) | 2010-01-18 | 2020-02-04 | Apple Inc. | Systems and methods for hands-free notification summaries |
US11423886B2 (en) | 2010-01-18 | 2022-08-23 | Apple Inc. | Task flow identification based on user intent |
US10705794B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Automatically adapting user interfaces for hands-free interaction |
US10706841B2 (en) | 2010-01-18 | 2020-07-07 | Apple Inc. | Task flow identification based on user intent |
US8892446B2 (en) | 2010-01-18 | 2014-11-18 | Apple Inc. | Service orchestration for intelligent automated assistant |
US9424862B2 (en) | 2010-01-25 | 2016-08-23 | Newvaluexchange Ltd | Apparatuses, methods and systems for a digital conversation management platform |
US8977584B2 (en) | 2010-01-25 | 2015-03-10 | Newvaluexchange Global Ai Llp | Apparatuses, methods and systems for a digital conversation management platform |
US9424861B2 (en) | 2010-01-25 | 2016-08-23 | Newvaluexchange Ltd | Apparatuses, methods and systems for a digital conversation management platform |
US9431028B2 (en) | 2010-01-25 | 2016-08-30 | Newvaluexchange Ltd | Apparatuses, methods and systems for a digital conversation management platform |
US9633660B2 (en) | 2010-02-25 | 2017-04-25 | Apple Inc. | User profiling for voice input processing |
US10049675B2 (en) | 2010-02-25 | 2018-08-14 | Apple Inc. | User profiling for voice input processing |
US10762293B2 (en) | 2010-12-22 | 2020-09-01 | Apple Inc. | Using parts-of-speech tagging and named entity recognition for spelling correction |
US10102359B2 (en) | 2011-03-21 | 2018-10-16 | Apple Inc. | Device access using voice authentication |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US10706373B2 (en) | 2011-06-03 | 2020-07-07 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US11120372B2 (en) | 2011-06-03 | 2021-09-14 | Apple Inc. | Performing actions associated with task items that represent tasks to perform |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US10241644B2 (en) | 2011-06-03 | 2019-03-26 | Apple Inc. | Actionable reminder entries |
US9798393B2 (en) | 2011-08-29 | 2017-10-24 | Apple Inc. | Text correction processing |
JP2013078118A (en) * | 2011-09-15 | 2013-04-25 | Jvc Kenwood Corp | Noise reduction device, audio input device, radio communication device, and noise reduction method |
US10241752B2 (en) | 2011-09-30 | 2019-03-26 | Apple Inc. | Interface for a virtual digital assistant |
US10134385B2 (en) | 2012-03-02 | 2018-11-20 | Apple Inc. | Systems and methods for name pronunciation |
US9483461B2 (en) | 2012-03-06 | 2016-11-01 | Apple Inc. | Handling speech synthesis of content for multiple languages |
US9953088B2 (en) | 2012-05-14 | 2018-04-24 | Apple Inc. | Crowd sourcing information to fulfill user requests |
US10079014B2 (en) | 2012-06-08 | 2018-09-18 | Apple Inc. | Name recognition system |
US9495129B2 (en) | 2012-06-29 | 2016-11-15 | Apple Inc. | Device, method, and user interface for voice-activated navigation and browsing of a document |
US9576574B2 (en) | 2012-09-10 | 2017-02-21 | Apple Inc. | Context-sensitive handling of interruptions by intelligent digital assistant |
US9971774B2 (en) | 2012-09-19 | 2018-05-15 | Apple Inc. | Voice-based media searching |
US10199051B2 (en) | 2013-02-07 | 2019-02-05 | Apple Inc. | Voice trigger for a digital assistant |
US10978090B2 (en) | 2013-02-07 | 2021-04-13 | Apple Inc. | Voice trigger for a digital assistant |
US9368114B2 (en) | 2013-03-14 | 2016-06-14 | Apple Inc. | Context-sensitive handling of interruptions |
US9697822B1 (en) | 2013-03-15 | 2017-07-04 | Apple Inc. | System and method for updating an adaptive speech recognition model |
US9922642B2 (en) | 2013-03-15 | 2018-03-20 | Apple Inc. | Training an at least partial voice command system |
US9633674B2 (en) | 2013-06-07 | 2017-04-25 | Apple Inc. | System and method for detecting errors in interactions with a voice-based digital assistant |
US9966060B2 (en) | 2013-06-07 | 2018-05-08 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9582608B2 (en) | 2013-06-07 | 2017-02-28 | Apple Inc. | Unified ranking with entropy-weighted information for phrase-based semantic auto-completion |
US9620104B2 (en) | 2013-06-07 | 2017-04-11 | Apple Inc. | System and method for user-specified pronunciation of words for speech synthesis and recognition |
US9966068B2 (en) | 2013-06-08 | 2018-05-08 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10657961B2 (en) | 2013-06-08 | 2020-05-19 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10185542B2 (en) | 2013-06-09 | 2019-01-22 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
US9300784B2 (en) | 2013-06-13 | 2016-03-29 | Apple Inc. | System and method for emergency calls initiated by voice command |
US10791216B2 (en) | 2013-08-06 | 2020-09-29 | Apple Inc. | Auto-activating smart responses based on activities from remote devices |
US9620105B2 (en) | 2014-05-15 | 2017-04-11 | Apple Inc. | Analyzing audio input for efficient speech and music recognition |
US10592095B2 (en) | 2014-05-23 | 2020-03-17 | Apple Inc. | Instantaneous speaking of content on touch devices |
US9502031B2 (en) | 2014-05-27 | 2016-11-22 | Apple Inc. | Method for supporting dynamic grammars in WFST-based ASR |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US9842101B2 (en) | 2014-05-30 | 2017-12-12 | Apple Inc. | Predictive conversion of language input |
US11257504B2 (en) | 2014-05-30 | 2022-02-22 | Apple Inc. | Intelligent assistant for home automation |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US11133008B2 (en) | 2014-05-30 | 2021-09-28 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9734193B2 (en) | 2014-05-30 | 2017-08-15 | Apple Inc. | Determining domain salience ranking from ambiguous words in natural speech |
US10497365B2 (en) | 2014-05-30 | 2019-12-03 | Apple Inc. | Multi-command single utterance input method |
US9430463B2 (en) | 2014-05-30 | 2016-08-30 | Apple Inc. | Exemplar-based natural language processing |
US9966065B2 (en) | 2014-05-30 | 2018-05-08 | Apple Inc. | Multi-command single utterance input method |
US10169329B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Exemplar-based natural language processing |
US10083690B2 (en) | 2014-05-30 | 2018-09-25 | Apple Inc. | Better resolution when referencing to concepts |
US10078631B2 (en) | 2014-05-30 | 2018-09-18 | Apple Inc. | Entropy-guided text prediction using combined word and character n-gram language models |
US10289433B2 (en) | 2014-05-30 | 2019-05-14 | Apple Inc. | Domain specific language for encoding assistant dialog |
US9760559B2 (en) | 2014-05-30 | 2017-09-12 | Apple Inc. | Predictive text input |
US9785630B2 (en) | 2014-05-30 | 2017-10-10 | Apple Inc. | Text prediction using combined word N-gram and unigram language models |
US9633004B2 (en) | 2014-05-30 | 2017-04-25 | Apple Inc. | Better resolution when referencing to concepts |
US10904611B2 (en) | 2014-06-30 | 2021-01-26 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10659851B2 (en) | 2014-06-30 | 2020-05-19 | Apple Inc. | Real-time digital assistant knowledge updates |
US9668024B2 (en) | 2014-06-30 | 2017-05-30 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US10446141B2 (en) | 2014-08-28 | 2019-10-15 | Apple Inc. | Automatic speech recognition based on user feedback |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10431204B2 (en) | 2014-09-11 | 2019-10-01 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US10789041B2 (en) | 2014-09-12 | 2020-09-29 | Apple Inc. | Dynamic thresholds for always listening speech trigger |
US9986419B2 (en) | 2014-09-30 | 2018-05-29 | Apple Inc. | Social reminders |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US9886432B2 (en) | 2014-09-30 | 2018-02-06 | Apple Inc. | Parsimonious handling of word inflection via categorical stem + suffix N-gram language models |
US9646609B2 (en) | 2014-09-30 | 2017-05-09 | Apple Inc. | Caching apparatus for serving phonetic pronunciations |
US10552013B2 (en) | 2014-12-02 | 2020-02-04 | Apple Inc. | Data detection |
US11556230B2 (en) | 2014-12-02 | 2023-01-17 | Apple Inc. | Data detection |
US9711141B2 (en) | 2014-12-09 | 2017-07-18 | Apple Inc. | Disambiguating heteronyms in speech synthesis |
US9865280B2 (en) | 2015-03-06 | 2018-01-09 | Apple Inc. | Structured dictation using intelligent automated assistants |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US10311871B2 (en) | 2015-03-08 | 2019-06-04 | Apple Inc. | Competing devices responding to voice triggers |
US11087759B2 (en) | 2015-03-08 | 2021-08-10 | Apple Inc. | Virtual assistant activation |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US10567477B2 (en) | 2015-03-08 | 2020-02-18 | Apple Inc. | Virtual assistant continuity |
US9899019B2 (en) | 2015-03-18 | 2018-02-20 | Apple Inc. | Systems and methods for structured stem and suffix language models |
US9842105B2 (en) | 2015-04-16 | 2017-12-12 | Apple Inc. | Parsimonious continuous-space phrase representations for natural language processing |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US10127220B2 (en) | 2015-06-04 | 2018-11-13 | Apple Inc. | Language identification from short strings |
US10101822B2 (en) | 2015-06-05 | 2018-10-16 | Apple Inc. | Language input correction |
US10255907B2 (en) | 2015-06-07 | 2019-04-09 | Apple Inc. | Automatic accent detection using acoustic models |
US10186254B2 (en) | 2015-06-07 | 2019-01-22 | Apple Inc. | Context-based endpoint detection |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US11500672B2 (en) | 2015-09-08 | 2022-11-15 | Apple Inc. | Distributed personal assistant |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US9697820B2 (en) | 2015-09-24 | 2017-07-04 | Apple Inc. | Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks |
US11010550B2 (en) | 2015-09-29 | 2021-05-18 | Apple Inc. | Unified language modeling framework for word prediction, auto-completion and auto-correction |
US10366158B2 (en) | 2015-09-29 | 2019-07-30 | Apple Inc. | Efficient word encoding for recurrent neural network language models |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US11526368B2 (en) | 2015-11-06 | 2022-12-13 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
US10446143B2 (en) | 2016-03-14 | 2019-10-15 | Apple Inc. | Identification of voice inputs providing credentials |
US9934775B2 (en) | 2016-05-26 | 2018-04-03 | Apple Inc. | Unit-selection text-to-speech synthesis based on predicted concatenation parameters |
US9972304B2 (en) | 2016-06-03 | 2018-05-15 | Apple Inc. | Privacy preserving distributed evaluation framework for embedded personalized systems |
US10249300B2 (en) | 2016-06-06 | 2019-04-02 | Apple Inc. | Intelligent list reading |
US11069347B2 (en) | 2016-06-08 | 2021-07-20 | Apple Inc. | Intelligent automated assistant for media exploration |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
US10354011B2 (en) | 2016-06-09 | 2019-07-16 | Apple Inc. | Intelligent automated assistant in a home environment |
US10509862B2 (en) | 2016-06-10 | 2019-12-17 | Apple Inc. | Dynamic phrase expansion of language input |
US10192552B2 (en) | 2016-06-10 | 2019-01-29 | Apple Inc. | Digital assistant providing whispered speech |
US11037565B2 (en) | 2016-06-10 | 2021-06-15 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10733993B2 (en) | 2016-06-10 | 2020-08-04 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
US10067938B2 (en) | 2016-06-10 | 2018-09-04 | Apple Inc. | Multilingual word prediction |
US10490187B2 (en) | 2016-06-10 | 2019-11-26 | Apple Inc. | Digital assistant providing automated status report |
US10269345B2 (en) | 2016-06-11 | 2019-04-23 | Apple Inc. | Intelligent task discovery |
US11152002B2 (en) | 2016-06-11 | 2021-10-19 | Apple Inc. | Application integration with a digital assistant |
US10089072B2 (en) | 2016-06-11 | 2018-10-02 | Apple Inc. | Intelligent device arbitration and control |
US10297253B2 (en) | 2016-06-11 | 2019-05-21 | Apple Inc. | Application integration with a digital assistant |
US10521466B2 (en) | 2016-06-11 | 2019-12-31 | Apple Inc. | Data driven natural language event detection and classification |
US10593346B2 (en) | 2016-12-22 | 2020-03-17 | Apple Inc. | Rank-reduced token representation for automatic speech recognition |
US11405466B2 (en) | 2017-05-12 | 2022-08-02 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10791176B2 (en) | 2017-05-12 | 2020-09-29 | Apple Inc. | Synchronization and task delegation of a digital assistant |
US10810274B2 (en) | 2017-05-15 | 2020-10-20 | Apple Inc. | Optimizing dialogue policy decisions for digital assistants using implicit feedback |
JPWO2022091591A1 (en) * | 2020-10-30 | 2022-05-05 | ||
WO2022102311A1 (en) * | 2020-11-10 | 2022-05-19 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | Sound pickup device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JPH0286397A (en) | Microphone array | |
US6118883A (en) | System for controlling low frequency acoustical directivity patterns and minimizing directivity discontinuities during frequency transitions | |
KR0152663B1 (en) | Imgae derived directional microphones | |
JP3866828B2 (en) | Wide array of circularly symmetric zero-redundancy planes over a wide frequency range | |
US7269263B2 (en) | Method of broadband constant directivity beamforming for non linear and non axi-symmetric sensor arrays embedded in an obstacle | |
US7889873B2 (en) | Microphone aperture | |
KR940003447B1 (en) | Undirectional second order gradient microphone | |
EP2773131B1 (en) | Spherical microphone array | |
US4629029A (en) | Multiple driver manifold | |
JPH1070412A (en) | Phased array having form of logarithmic spiral | |
Stamać et al. | Designing the Acoustic Camera using MATLAB with respect to different types of microphone arrays | |
US5596550A (en) | Low cost shading for wide sonar beams | |
EP3515091B1 (en) | Loudspeaker horn array | |
US7991170B2 (en) | Loudspeaker crossover filter | |
US11671751B2 (en) | Microphone array | |
EP3716644A1 (en) | Two-way quasi point-source wide-dispersion speaker | |
JP2004279390A (en) | Beam forming by microphone using indefinite term | |
US5504716A (en) | Passive sonar transducer arrangement | |
US8379892B1 (en) | Array of high frequency loudspeakers | |
JPS6224702A (en) | Adaptive antenna system | |
JP2000152391A (en) | Method for sensor layout | |
JPS5915183Y2 (en) | horn speaker | |
US20050157590A1 (en) | Surface acoustic antenna for submarines | |
JPS5951371A (en) | Antenna | |
US20210152926A1 (en) | Variable port microphone |