JP4616736B2 - Sound collection and playback device - Google Patents


Info

Publication number
JP4616736B2
Authority
Japan (JP)
Prior art keywords
sound, sound source, signals, band, signal
Legal status
Active
Application number
JP2005262435A
Other languages
Japanese (ja)
Other versions
JP2007074665A (en)
Inventor
真理子 青木
賢一 古家
章俊 片岡
Current Assignee
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Application filed by Nippon Telegraph and Telephone Corp
Priority to JP2005262435A
Publication of JP2007074665A
Application granted
Publication of JP4616736B2
Status: Active
Anticipated expiration

Description

The present invention relates to a sound collection and playback device that reproduces sound picked up in stereo. In particular, when a plurality of sound sources are located at different positions in a space, the device emphasizes the direction information of each source so that every listener perceives the direction of the sound correctly, regardless of the listener's position.

Conventional stereo sound collection and playback picks up sound with two microphones and exploits the direction information that arises during pickup (the inter-microphone differences in arrival level and arrival time (arrival phase) that occur when sound from a single source reaches the two microphones). By reproducing the sound through two loudspeakers, it lets the listener perceive the direction of each source (see, for example, Non-Patent Literature 1).
John M. Eargle (translated by Masaki Sawaguchi), "Handbook of Recording Engineering, Second Edition", Stereo Sound Inc., June 30, 2004, pp. 45-46

With conventional stereo sound collection and playback, however, the direction of a sound can be perceived only from the midpoint between the two loudspeakers (the sweet spot), which restricts where listeners can sit. A listener positioned near one of the two loudspeakers, for example near the left one, is dominated by the sound reproduced from that nearby loudspeaker, so every sound appears to come from the left.
In view of this problem, an object of the present invention is to provide a sound collection and playback device that allows every listener to perceive the direction of a sound correctly, regardless of the listener's position.

According to the present invention, a sound collection and playback device that picks up and reproduces sound from a plurality of sound sources whose positional relationship is known comprises: two microphones, spaced apart from each other, that pick up the sound; band dividing means that receives the picked-up signal of each of the two microphones and divides and converts each picked-up signal into a plurality of frequency band signals; band-wise inter-channel parameter value difference detecting means that receives the two sets of frequency band signals from the band dividing means and, for each matching band of the two sets, detects the difference in level or phase between the frequency band signals arising from the positions of the two microphones as a band-wise inter-channel parameter value difference; sound source signal determining means that obtains, for each sound source, the mean and variance over all frequency bands of the band-wise inter-channel parameter value difference measured individually for that source, judges for each source whether the band-wise inter-channel parameter value difference received from the detecting means lies between the mean plus the variance and the mean minus the variance, judges, when it does, that the band of the frequency band signal mainly contains sound from the source under test, and outputs that judgment information; weight multiplying means that receives the judgment information and the frequency band signals and creates as many output source signals as there are sources, by passing through unchanged, as an output source signal, the frequency band signal corresponding to the source judged by the judgment information to be mainly contained, and multiplying the frequency band signals corresponding to the other sources by a small positive weight value close to 0 to form the remaining output source signals; source signal synthesizing means that receives the output source signals and converts each back into a time waveform to form an output signal; and as many loudspeaker means as sources, each receiving one of the output signals and reproducing it, the loudspeaker means being arranged in correspondence with the positions of the sources whose sound is emphasized in their respective input output signals.

According to the invention, as many output signals as there are sound sources are created, each output signal is built so that it mainly contains the signal components of one source, and these output signals are reproduced by loudspeaker means that are equal in number to the sources and arranged in a positional relationship corresponding to that of the sources. Every listener can therefore perceive the direction of a sound correctly, regardless of the listener's position.

Embodiments of the invention will now be described by way of example with reference to the drawings.
FIG. 1 shows the configuration of one embodiment of a sound collection and playback device according to the invention for the case of three sound sources. In this example the device consists of two microphones 11 and 12, band dividing means 13, band-wise inter-channel parameter value difference detecting means 14, sound source signal determining means 15, weight multiplying means 16, source signal synthesizing means 17, and loudspeaker means. The loudspeaker means are speakers; this example has three, 18_L, 18_C, and 18_R. In the figure, L, C, and R denote the sound sources.

The two microphones 11 and 12 are placed a predetermined distance apart. Source L is assumed to lie to the left of microphones 11 and 12, i.e. near microphone 11, and source R to their right, i.e. near microphone 12. Source C is assumed to lie in front of the microphones, midway between them (on their center line). The positional relationship of sources L, C, and R is known.
Let S_L(n), S_C(n), and S_R(n) be the sounds (sound waves) emitted by sources L, C, and R, let x_L(n) be the signal picked up by the left microphone 11, and let x_R(n) be the signal picked up by the right microphone 12.

The picked-up signals x_L(n) and x_R(n) are fed to the band dividing means 13, which partitions them into time frames, transforms each frame (for example by a fast Fourier transform) into frequency band signals X_L(ω_i) and X_R(ω_i), i = 1, …, N, and divides them into a predetermined set of bands, where N is the number of bands. The division is made fine enough that each band signal consists mainly of components from a single source.
The frequency band signals X_L(ω_i) and X_R(ω_i) are fed to the band-wise inter-channel parameter value difference detecting means 14, which computes the inter-channel level difference (inter-channel arrival level difference) ΔLev(ω_i) and the inter-channel phase difference (inter-channel arrival phase difference) Δang(ω_i) defined by equations (1) and (2):

ΔLev(ω_i) = 20 log10(|X_L(ω_i)| / |X_R(ω_i)|)   …(1)
Δang(ω_i) = ang X_L(ω_i) − ang X_R(ω_i)   …(2)

The sound source signal determining means 15 then uses the inter-channel level difference ΔLev(ω_i) or the inter-channel phase difference Δang(ω_i) computed by the detecting means 14 to judge which source emitted the signal in each band.
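As an illustration of equations (1) and (2), the following NumPy sketch transforms one time frame of the two microphone signals and computes the band-wise level and phase differences. The function name, the frame length, and the small eps guard against division by zero are illustrative assumptions, not part of the patent.

```python
import numpy as np

def band_parameter_differences(x_l, x_r, n_fft=512):
    """One frame of the two picked-up signals -> frequency band signals
    X_L(omega_i), X_R(omega_i), then the inter-channel level difference
    (eq. 1) and the inter-channel phase difference (eq. 2) per band."""
    X_l = np.fft.rfft(x_l, n_fft)
    X_r = np.fft.rfft(x_r, n_fft)
    eps = 1e-12  # guard against log(0) / division by zero (assumption)
    d_lev = 20.0 * np.log10((np.abs(X_l) + eps) / (np.abs(X_r) + eps))
    d_ang = np.angle(X_l) - np.angle(X_r)
    return d_lev, d_ang
```

For a signal that is simply twice as strong in the left channel, every energetic band shows ΔLev ≈ 6 dB and Δang ≈ 0, as equations (1) and (2) predict.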

First, consider judging all bands by the inter-channel level difference ΔLev(ω_i).
The inter-channel level difference of each source L, C, R is measured individually, and its mean and variance are computed. Let ML and ρL be the mean and variance of the individually measured inter-channel level difference ΔLevL(ω_i) of source L. Likewise, let MC and ρC be the mean and variance of ΔLevC(ω_i) for source C, and MR and ρR those of ΔLevR(ω_i) for source R.

Because sources L, C, and R are arranged as shown in FIG. 1, comparing ML, MC, and MR gives the ordering in equation (3):

ML > MC > MR   …(3)

Accordingly, the bands mainly containing the signal of source L are found by selecting, among all bands, those whose inter-channel level difference ΔLev(ω_i) satisfies equation (4):

ML − ρL ≤ ΔLev(ω_i) ≤ ML + ρL   …(4)

Similarly, the bands mainly containing the signal of source C and those mainly containing the signal of source R are found with equations (5) and (6):

MC − ρC ≤ ΔLev(ω_i) ≤ MC + ρC   …(5)
MR − ρR ≤ ΔLev(ω_i) ≤ MR + ρR   …(6)

In this way the sound source signal determining means 15 judges which source's signal each band mainly contains and sends the weight multiplying means 16 judgment information of the form of equations (7-1) to (7-3): equation (7-1) is the judgment information when band i is judged to mainly contain the signal of source L, and equations (7-2) and (7-3) correspond to sources C and R, respectively.

Res(ω_i) = L   …(7-1)
Res(ω_i) = C   …(7-2)
Res(ω_i) = R   …(7-3)
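A minimal sketch of the interval test in equations (4) to (6): each band's measured difference is compared against every source's mean ± variance window. The dictionary layout and the None fallback for bands matching no window are assumptions for illustration.

```python
def classify_bands(d_lev, stats):
    """Assign each band to the first source whose interval
    mean - variance <= value <= mean + variance (eqs. 4-6) contains
    the band's inter-channel difference; None if no interval matches."""
    labels = []
    for value in d_lev:
        res = None
        for source, (mean, var) in stats.items():
            if mean - var <= value <= mean + var:
                res = source
                break
        labels.append(res)
    return labels
```

With individually measured statistics such as {"L": (6.0, 2.0), "C": (0.0, 1.5), "R": (-6.0, 2.0)}, a band at +5 dB is labeled L and one at −7 dB is labeled R.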
Next, consider judgment by the inter-channel phase difference Δang(ω_i).
When the inter-channel phase difference Δang(ω_i) is used as the parameter value difference for judgment, the same reasoning applies as with the inter-channel level difference ΔLev(ω_i). The inter-channel phase difference of each source L, C, R is measured individually, and its mean and variance are computed: let angML and angρL be the mean and variance of the individually measured inter-channel phase difference ΔangL(ω_i) of source L, angMC and angρC those of ΔangC(ω_i) for source C, and angMR and angρR those of ΔangR(ω_i) for source R.

As with ML, MC, and MR, angML, angMC, and angMR satisfy the ordering in equation (8):

angML > angMC > angMR   …(8)

Therefore, to find the bands mainly containing the signal of source L, for example, it suffices to select the bands satisfying equation (9); the bands mainly containing the signals of sources C and R are found with equations (10) and (11), respectively.

angML − angρL ≤ Δang(ω_i) ≤ angML + angρL   …(9)
angMC − angρC ≤ Δang(ω_i) ≤ angMC + angρC   …(10)
angMR − angρR ≤ Δang(ω_i) ≤ angMR + angρR   …(11)

Which of the inter-channel level difference ΔLev(ω_i) and the inter-channel phase difference Δang(ω_i) to use in each band depends, for example, on the characteristics of the input system. If two directional microphones are used as microphones 11 and 12, the inter-channel level difference ΔLev(ω_i) is computed stably over the whole band, whereas the inter-channel phase difference Δang(ω_i) is easily disturbed by the directivity, so judging all bands by the inter-channel level difference ΔLev(ω_i) is preferable.

If, on the other hand, two omnidirectional microphones are used as microphones 11 and 12, the inter-channel phase difference Δang(ω_i) may be used. In that case little inter-channel level difference ΔLev(ω_i) generally arises at low frequencies (1 kHz and below), so the inter-channel phase difference Δang(ω_i) is used there, while at high frequencies the phase wraps and Δang(ω_i) becomes hard to determine uniquely, so the inter-channel level difference ΔLev(ω_i) may be used instead.
The weight multiplying means 16 receives the judgment information from the sound source signal determining means 15 and the frequency band signals X_L(ω_i) and X_R(ω_i) from the band dividing means 13, and multiplies them by weight values as follows, based on the results judged by the sound source signal determining means 15.
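The band-wise choice between the two parameters described above can be sketched as follows. The 1 kHz crossover comes from the text; the function shape and names are illustrative assumptions.

```python
def choose_parameter(freq_hz, directional=False, crossover_hz=1000.0):
    """Directional microphones: the level difference is stable over the
    whole band, so use it everywhere. Omnidirectional microphones: use
    the phase difference at low frequencies (little level difference
    arises there) and the level difference above the crossover, where
    phase wrapping makes the phase difference ambiguous."""
    if directional:
        return "level"
    return "phase" if freq_hz <= crossover_hz else "level"
```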

First, so that the signals of the three sources L, C, and R can be output individually from the source signal synthesizing means 17, one output frequency band signal is prepared per source (three in all), say Y_L(ω_i), Y_C(ω_i), Y_R(ω_i), i = 1, …, N. These are hereafter called the output source signals. They are weighted as follows according to the judgment information from the sound source signal determining means 15.

If Res(ω_i) = L:
Y_L(ω_i) = X_L(ω_i)
Y_C(ω_i) = (α/2)·(X_L(ω_i) + X_R(ω_i))
Y_R(ω_i) = α·X_R(ω_i)

If Res(ω_i) = C:
Y_L(ω_i) = α·X_L(ω_i)
Y_C(ω_i) = (X_L(ω_i) + X_R(ω_i))/2
Y_R(ω_i) = α·X_R(ω_i)

If Res(ω_i) = R:
Y_L(ω_i) = α·X_L(ω_i)
Y_C(ω_i) = (α/2)·(X_L(ω_i) + X_R(ω_i))
Y_R(ω_i) = X_R(ω_i)

Here α is a small value close to 0, for example about 0.1 or 0.2. Even with α = 0 each output source signal Y_L(ω_i), Y_C(ω_i), Y_R(ω_i) would still consist mainly of components from a single source, but with α = 0 some frequency components would be output from none of the output source signals, which tends to cause distortion; hence α is set to about 0.1 or 0.2.

Both X_L(ω_i) and X_R(ω_i) are used above to build the output source signals Y_L(ω_i), Y_C(ω_i), Y_R(ω_i), for the following reason. Which of X_L(ω_i) and X_R(ω_i) to use depends on which of the two picks up the signal of each source L, C, R with the higher signal-to-noise (SN) ratio. Source L, for example, is closer to the left microphone 11, so it is received with a higher SN ratio in X_L(ω_i); the signal obtained by weighting X_L(ω_i) is therefore used as the output source signal Y_L(ω_i). Source R, conversely, is closer to the right microphone 12 and is received with a higher SN ratio in X_R(ω_i), so the weighted X_R(ω_i) is used as Y_R(ω_i). Source C in the middle is received at the same level by microphones 11 and 12, so both signals X_L(ω_i) and X_R(ω_i) are used; to keep its level in line with sources L and R, the weight applied to the sum of X_L(ω_i) and X_R(ω_i) is halved.
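The three-source weighting rules above can be sketched as follows. Giving unclassified bands only the small weight on every output is an assumption not spelled out in the text; everything else follows the case rules for Res(ω_i) = L, C, R.

```python
import numpy as np

def weight_bands(X_l, X_r, labels, alpha=0.1):
    """Build output source signals Y_L, Y_C, Y_R from band labels.
    The dominant source keeps its better-SNR channel at full weight;
    all other outputs receive only the small positive weight alpha."""
    n = len(labels)
    Y_l = np.zeros(n, dtype=complex)
    Y_c = np.zeros(n, dtype=complex)
    Y_r = np.zeros(n, dtype=complex)
    for i, res in enumerate(labels):
        mix = 0.5 * (X_l[i] + X_r[i])  # equal-level combination for C
        if res == "L":
            Y_l[i], Y_c[i], Y_r[i] = X_l[i], alpha * mix, alpha * X_r[i]
        elif res == "C":
            Y_l[i], Y_c[i], Y_r[i] = alpha * X_l[i], mix, alpha * X_r[i]
        elif res == "R":
            Y_l[i], Y_c[i], Y_r[i] = alpha * X_l[i], alpha * mix, X_r[i]
        else:  # band matched no source: small weight everywhere (assumption)
            Y_l[i], Y_c[i], Y_r[i] = alpha * X_l[i], alpha * mix, alpha * X_r[i]
    return Y_l, Y_c, Y_r
```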

The output source signals Y_L(ω_i), Y_C(ω_i), Y_R(ω_i) are fed to the source signal synthesizing means 17, which converts each back to a time waveform by an inverse Fourier transform, giving the output signals y_L(n), y_C(n), y_R(n). These are reproduced by speakers 18_L, 18_C, and 18_R, respectively; as many speakers are provided as there are sources.
As for speaker placement, speaker 18_L, which reproduces the output signal y_L(n) emphasizing the signal of source L, must be placed to the listeners' left; speaker 18_R, which reproduces y_R(n) emphasizing source R, to their right; and speaker 18_C, which reproduces y_C(n) emphasizing source C, between speakers 18_L and 18_R, i.e. in the center.
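Converting an output source signal back to a time waveform is a plain inverse Fourier transform; a round trip through the forward transform recovers the frame, as this sketch (function name assumed) shows. A streaming implementation would additionally overlap-add successive frames, a detail outside the patent text.

```python
import numpy as np

def synthesize(Y, n_fft=512):
    """Inverse Fourier transform: one output source signal
    (Y_L, Y_C, or Y_R) back to its time waveform y(n) for one frame."""
    return np.fft.irfft(Y, n_fft)
```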

With the configuration and processing described above, the signals picked up by the two microphones 11 and 12 have their bands weighted according to the inter-channel level difference or inter-channel phase difference and are output as as many output signals as there are sources, each reproduced by a separate speaker. Each output signal mainly contains the signal of one source, so the output signals carry the sound's direction information in a more emphasized form than the received signals. Even listeners away from the sweet spot, for example near one of the speakers, can thus hear the sound in a way that makes its direction clear. The effect is greatest when speakers 18_L and 18_R are placed symmetrically about the listeners. The speakers 18_L, 18_C, and 18_R may, for example, be arranged in a straight line, but they may also be arranged in an arc surrounding the listeners.

The embodiment above assumed three sound sources, but the same approach applies when the number of sources grows to, say, four or more. The case of Q sources is described below with reference to FIG. 2; parts corresponding to FIG. 1 carry the same reference numerals.
In FIG. 2 the sources, ordered from the left as seen from the positions of microphones 11 and 12, are source 1, source 2, …, source Q. The sounds S_1(n), S_2(n), …, S_Q(n) they emit are picked up by microphones 11 and 12, and the picked-up signals x_L(n) and x_R(n) are fed to the band dividing means 13 and converted into the frequency band signals X_L(ω_i) and X_R(ω_i), i = 1, …, N.

The frequency band signals X_L(ω_i) and X_R(ω_i) are fed to the band-wise inter-channel parameter value difference detecting means 14, which computes the inter-channel level difference ΔLev(ω_i) and the inter-channel phase difference Δang(ω_i) by equations (1) and (2) above; the sound source signal determining means 15 then uses them to judge which source emitted the signal in each band.
As an example, judgment by the inter-channel level difference ΔLev(ω_i) is described below.

The inter-channel level difference of each source is measured individually and its mean and variance computed. Let M1, M2, …, MQ be the means of the inter-channel level differences ΔLev1(ω_i), ΔLev2(ω_i), …, ΔLevQ(ω_i) of sources 1, 2, …, Q, and ρ1, ρ2, …, ρQ their variances; then the ordering in equation (12) holds:

M1 > M2 > … > MQ   …(12)

Hence, to find the bands mainly containing the signal of source 1, for example, it suffices to select the bands satisfying equation (13):

M1 − ρ1 ≤ ΔLev(ω_i) ≤ M1 + ρ1   …(13)

A band satisfying equation (13) is judged to mainly contain the signal of source 1, and the sound source signal determining means 15 sends the weight multiplying means 16 judgment information of the form of equation (14-1). Likewise, for bands judged to mainly contain the signals of source 2 or source Q, it sends judgment information of the forms (14-2) and (14-3):

Res(ω_i) = 1   …(14-1)
Res(ω_i) = 2   …(14-2)
Res(ω_i) = Q   …(14-3)

The weight multiplying means 16 prepares as many output source signals as there are sources (Q), say Y_1(ω_i), Y_2(ω_i), …, Y_Q(ω_i), i = 1, …, N, and weights them according to the judgment information from the sound source signal determining means 15.

Let m (1 ≤ m ≤ Q) be the source index, and let mc be the index of the source located in front of the two microphones 11 and 12, midway between them. Assume that the signal of a source m to the left of source mc (1 ≤ m < mc) is received with the higher SN ratio in X_L(ω_i), and that of a source m to its right (mc < m ≤ Q) with the higher SN ratio in X_R(ω_i).
Suppose now that the judgment information from the sound source signal determining means 15 is

Res(ω_i) = k

i.e. that k is the index of the source whose signal band i is judged to mainly contain, with k < mc. The output source signals Y_1(ω_i), Y_2(ω_i), …, Y_Q(ω_i) are then weighted as follows:

Y_k(ω_i) = X_L(ω_i)
Y_m(ω_i) = α·X_L(ω_i)   (1 ≤ m < k, k < m < mc)
Y_mc(ω_i) = (α/2)·(X_L(ω_i) + X_R(ω_i))
Y_m(ω_i) = α·X_R(ω_i)   (mc < m ≤ Q)

These output source signals are fed to the source signal synthesizing means 17 and converted back to time waveforms by an inverse Fourier transform, giving the output signals y_1(n), y_2(n), …, y_Q(n).
The loudspeaker means are, for example, speakers, provided in the same number as the sources; the output signals y_1(n), y_2(n), …, y_Q(n) are fed to speakers 18_1, 18_2, …, 18_Q, respectively, and reproduced. The speakers 18_1, 18_2, …, 18_Q are set up in order from the listeners' left.

The case k < mc has been described above; when k > mc or k = mc, the output sound source signals are multiplied by weights as follows.

For k > mc:

Y_k(ω_i) = X_R(ω_i)
Y_m(ω_i) = α·X_L(ω_i)   (1 ≤ m < mc)
Y_mc(ω_i) = (α/2)·(X_L(ω_i) + X_R(ω_i))
Y_m(ω_i) = α·X_R(ω_i)   (mc < m < k, k < m ≤ Q)

For k = mc:

Y_k(ω_i) = (X_L(ω_i) + X_R(ω_i))/2
Y_m(ω_i) = α·X_L(ω_i)   (1 ≤ m < mc)
Y_m(ω_i) = α·X_R(ω_i)   (mc < m ≤ Q)

As described above, even when there are Q sound sources, the sound can be presented so that every listener perceives its direction correctly, regardless of the listener's position.
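The three weighting cases above can be sketched in code. This is a minimal illustration, not part of the patent: the function name output_weights, the 1-based list indexing, and the default alpha = 0.15 (within the 0.1 to 0.2 range of claim 3) are assumptions of the example, which operates on the two band values X_L(ω_i) and X_R(ω_i) of a single band i.

```python
def output_weights(k, mc, Q, XL, XR, alpha=0.15):
    """Compute the output sound source signals Y_1(w_i)..Y_Q(w_i) for one
    band i whose dominant source index is k, covering the three cases
    k < mc, k > mc and k = mc described in the text.

    XL, XR : complex band values X_L(w_i), X_R(w_i)
    alpha  : small attenuation weight close to 0 (claim 3: 0.1 to 0.2)
    Indices are 1-based to match the text; a Python list is returned."""
    Y = [0j] * (Q + 1)  # Y[1]..Y[Q]; Y[0] unused
    for m in range(1, Q + 1):
        if m < mc:
            Y[m] = alpha * XL               # sources left of center
        elif m == mc:
            Y[m] = (alpha / 2) * (XL + XR)  # center source
        else:
            Y[m] = alpha * XR               # sources right of center
    # the dominant source k gets the unattenuated signal instead
    if k < mc:
        Y[k] = XL
    elif k > mc:
        Y[k] = XR
    else:  # k == mc
        Y[k] = (XL + XR) / 2
    return Y[1:]
```

Note that for k = mc the center output receives the average of both channels at full weight, matching Y_k(ω_i) = (X_L(ω_i) + X_R(ω_i))/2 above.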

In the example described above, as shown for instance in Equation (4), the inter-channel level difference of each sound source is measured individually, and its mean and variance over all bands are used to determine which sound source's signal the whole-band inter-channel level difference ΔLev(ω_i) mainly contains. When the distributions of the inter-channel level differences of the sound sources overlap, however, the following condition may be used in place of Equation (4):

ML − a·ρL ≤ ΔLev(ω_i) ≤ ML + a·ρL

where a < 1.
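The scaled judgment rule can be sketched as follows. A minimal illustration, not from the patent: the function name dominant_source, the stats list of per-source (ML, ρL) pairs measured in advance, and the nearest-mean tie-break used when narrowed intervals still overlap are all assumptions of this sketch.

```python
def dominant_source(delta_lev, stats, a=1.0):
    """Return the 1-based index of the sound source whose pre-measured
    inter-channel level-difference statistics accept the value
    delta_lev observed in one band, or None if no source accepts it.

    stats : list of (ML, rhoL) pairs, one per source, measured
            individually in advance (mean and spread of that source's
            inter-channel level difference).
    a     : interval scale; a < 1 narrows the acceptance intervals
            ML - a*rhoL <= delta_lev <= ML + a*rhoL, which helps when
            the sources' distributions overlap."""
    best, best_dist = None, None
    for m, (ML, rhoL) in enumerate(stats, start=1):
        if ML - a * rhoL <= delta_lev <= ML + a * rhoL:
            dist = abs(delta_lev - ML)
            # if several intervals still overlap, prefer the nearest mean
            if best is None or dist < best_dist:
                best, best_dist = m, dist
    return best
```

With a = 1 this reduces to the mean-plus-or-minus-variance test of Equation (4); shrinking a trades coverage for fewer misattributed bands.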

The sound collection and playback device according to this invention can be used for sound pickup and playback in, for example, a video conference system.

A block diagram for explaining one embodiment of this invention.
A block diagram for explaining another embodiment of this invention.

Explanation of symbols

11 Microphone
12 Microphone
13 Band division means
14 Band-by-band inter-channel parameter value difference detection means
15 Sound source signal determination means
16 Weight multiplication means
17 Sound source signal synthesis means

Claims (4)

1. A sound collection and playback device that picks up and reproduces sound from a plurality of sound sources whose positional relationship is known, comprising:
two microphones, arranged apart from each other, that pick up the sound;
band division means that receives the pickup signals of the two microphones and divides and converts each pickup signal into a plurality of frequency band signals;
band-by-band inter-channel parameter value difference detection means that receives the two pluralities of frequency band signals from the band division means and detects, for each common band of the two sets of frequency band signals, the difference in level of the frequency band signals arising from the positions of the two microphones as a band-by-band inter-channel parameter value difference;
sound source signal determination means that obtains the mean and variance, over all frequency bands, of the band-by-band inter-channel parameter value difference measured individually for each sound source, determines for each sound source whether the band-by-band inter-channel parameter value difference input from the band-by-band inter-channel parameter value difference detection means lies between the value obtained by adding the variance to the mean and the value obtained by subtracting the variance from the mean, determines, when it does, that the band of that frequency band signal mainly contains sound input from the sound source for which the determination was made, and outputs the determination information;
weight multiplication means that receives the determination information and the pluralities of frequency band signals and creates as many output sound source signals as there are sound sources, by outputting unchanged, as an output sound source signal, the frequency band signal corresponding to the sound source determined by the determination information to be mainly contained, and by multiplying the frequency band signals corresponding to the other sound sources by a small positive weight value close to 0 to form output sound source signals;
sound source signal synthesis means that receives the output sound source signals and converts each of them back to a time waveform to form an output signal; and
as many loudspeaker means as there are sound sources, each receiving one of the output signals and reproducing it,
wherein the loudspeaker means are arranged in correspondence with the positions of the sound sources whose sound is emphasized in the respective output signals input to them.
2. A sound collection and playback device that picks up and reproduces sound from a plurality of sound sources whose positional relationship is known, comprising:
two microphones, arranged apart from each other, that pick up the sound;
band division means that receives the pickup signals of the two microphones and divides and converts each pickup signal into a plurality of frequency band signals;
band-by-band inter-channel parameter value difference detection means that receives the two pluralities of frequency band signals from the band division means and detects, for each common band of the two sets of frequency band signals, the difference in phase of the frequency band signals arising from the positions of the two microphones as a band-by-band inter-channel parameter value difference;
sound source signal determination means that obtains the mean and variance, over all frequency bands, of the band-by-band inter-channel parameter value difference measured individually for each sound source, determines for each sound source whether the band-by-band inter-channel parameter value difference input from the band-by-band inter-channel parameter value difference detection means lies between the value obtained by adding the variance to the mean and the value obtained by subtracting the variance from the mean, determines, when it does, that the band of that frequency band signal mainly contains sound input from the sound source for which the determination was made, and outputs the determination information;
weight multiplication means that receives the determination information and the pluralities of frequency band signals and creates as many output sound source signals as there are sound sources, by outputting unchanged, as an output sound source signal, the frequency band signal corresponding to the sound source determined by the determination information to be mainly contained, and by multiplying the frequency band signals corresponding to the other sound sources by a small positive weight value close to 0 to form output sound source signals;
sound source signal synthesis means that receives the output sound source signals and converts each of them back to a time waveform to form an output signal; and
as many loudspeaker means as there are sound sources, each receiving one of the output signals and reproducing it,
wherein the loudspeaker means are arranged in correspondence with the positions of the sound sources whose sound is emphasized in the respective output signals input to them.
3. The sound collection and playback device according to claim 1 or 2, wherein, when the weight value used in creating the output sound source signal associated with the sound source determined by the determination information is taken as 1, the weight value used in creating the output sound source signals associated with sound sources other than the determined sound source is 0.1 to 0.2.
4. The sound collection and playback device according to claim 1 or 2, wherein the band division in the band division means is fine enough that the frequency band signal of each band consists mainly of signal components from a single sound source.
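As an illustration of the band division means (13) and sound source signal synthesis means (17) recited in the claims, the following is a minimal sketch using a naive DFT in place of the unspecified transform; the function names band_split, synthesize, and apply_band_weights are illustrative assumptions, and a real implementation would use a windowed FFT frame by frame.

```python
import cmath
import math

def band_split(x):
    """Band division means (13): convert a real pickup frame into
    frequency band signals X(w_0)..X(w_{N-1}) with a naive DFT."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def synthesize(X):
    """Sound source signal synthesis means (17): return an output
    sound source signal to a (real) time waveform via the inverse DFT."""
    N = len(X)
    return [(sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N).real
            for n in range(N)]

def apply_band_weights(X, weights):
    """Weight multiplication means (16): per band, pass a band through
    (weight 1) or attenuate it (weight alpha close to 0)."""
    return [w * Xk for w, Xk in zip(weights, X)]
```

The round trip band_split followed by synthesize recovers the original frame, which is the property the claimed division/synthesis pair relies on.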
JP2005262435A 2005-09-09 2005-09-09 Sound collection and playback device Active JP4616736B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2005262435A JP4616736B2 (en) 2005-09-09 2005-09-09 Sound collection and playback device


Publications (2)

Publication Number Publication Date
JP2007074665A JP2007074665A (en) 2007-03-22
JP4616736B2 true JP4616736B2 (en) 2011-01-19

Family

ID=37935675

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2005262435A Active JP4616736B2 (en) 2005-09-09 2005-09-09 Sound collection and playback device

Country Status (1)

Country Link
JP (1) JP4616736B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5294603B2 (en) * 2007-10-03 2013-09-18 日本電信電話株式会社 Acoustic signal estimation device, acoustic signal synthesis device, acoustic signal estimation synthesis device, acoustic signal estimation method, acoustic signal synthesis method, acoustic signal estimation synthesis method, program using these methods, and recording medium
JP6693340B2 (en) 2016-08-30 2020-05-13 富士通株式会社 Audio processing program, audio processing device, and audio processing method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10313497A (en) * 1996-09-18 1998-11-24 Nippon Telegr & Teleph Corp <Ntt> Sound source separation method, system and recording medium
JPH1146400A (en) * 1997-07-25 1999-02-16 Yamaha Corp Sound image localization device
JP2003078988A (en) * 2001-09-06 2003-03-14 Nippon Telegr & Teleph Corp <Ntt> Sound pickup device, method and program, recording medium



Similar Documents

Publication Publication Date Title
US10674262B2 (en) Merging audio signals with spatial metadata
EP3320692B1 (en) Spatial audio processing apparatus
US8831231B2 (en) Audio signal processing device and audio signal processing method
JP5865899B2 (en) Stereo sound reproduction method and apparatus
KR100608002B1 (en) Method and apparatus for reproducing virtual sound
JP6284480B2 (en) Audio signal reproducing apparatus, method, program, and recording medium
CN104604254A (en) Audio processing device, method, and program
GB2556093A (en) Analysis of spatial metadata from multi-microphones having asymmetric geometry in devices
WO2014053875A1 (en) An apparatus and method for reproducing recorded audio with correct spatial directionality
KR20130080819A (en) Apparatus and method for localizing multichannel sound signal
US20050047619A1 (en) Apparatus, method, and program for creating all-around acoustic field
JP2009071406A (en) Wavefront synthesis signal converter and wavefront synthesis signal conversion method
JP4616736B2 (en) Sound collection and playback device
JP4116600B2 (en) Sound collection method, sound collection device, sound collection program, and recording medium recording the same
JP3174965U (en) Bone conduction 3D headphones
US20150146897A1 (en) Audio signal processing method and audio signal processing device
JP5743003B2 (en) Wavefront synthesis signal conversion apparatus and wavefront synthesis signal conversion method
JP5590169B2 (en) Wavefront synthesis signal conversion apparatus and wavefront synthesis signal conversion method
WO2018193160A1 (en) Ambience generation for spatial audio mixing featuring use of original and extended signal
JP2002152897A (en) Sound signal processing method, sound signal processing unit
Glasgal Improving 5.1 and Stereophonic Mastering/Monitoring by Using Ambiophonic Techniques
JP2019087839A (en) Audio system and correction method of the same
JP4917946B2 (en) Sound image localization processor
Griesinger Pitch, Timbre, Source Separation, and the Myths of Loudspeaker Imaging
Clark Measurement of Audio System Imaging Performance

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20070810

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20100413

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20100609

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20100713

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20100812

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20101012

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20101022

R150 Certificate of patent or registration of utility model

Ref document number: 4616736

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20131029

Year of fee payment: 3

S531 Written request for registration of change of domicile

Free format text: JAPANESE INTERMEDIATE CODE: R313531

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350