JPH0466888A - Extraction of feature for sound source - Google Patents

Extraction of feature for sound source

Info

Publication number
JPH0466888A
Authority
JP
Japan
Prior art keywords
sound source
estimation
signal
feature
estimation error
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP18088690A
Other languages
Japanese (ja)
Other versions
JP2763821B2 (en)
Inventor
Kiyohito Tokuda
清仁 徳田
Atsushi Fukazawa
深沢 敦司
Yuichi Shiraki
白木 裕一
Satoshi Shimizu
聡 清水
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oki Electric Industry Co Ltd
Original Assignee
Oki Electric Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oki Electric Industry Co Ltd filed Critical Oki Electric Industry Co Ltd
Priority to JP2180886A priority Critical patent/JP2763821B2/en
Publication of JPH0466888A publication Critical patent/JPH0466888A/en
Application granted granted Critical
Publication of JP2763821B2 publication Critical patent/JP2763821B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Abstract

PURPOSE: To reduce the number of receivers used and to shorten the analysis time by modeling the sound source signal as a nonlinear combination of the feature quantities of the sound source and calculating those feature quantities directly. CONSTITUTION: Sound source signals S1 from D sound sources 1 are received by a signal receiving section 2, and the discrete time-series signals S that it outputs are passed to a nonlinear feature quantity estimating section 10. An estimation signal generating section 11 in the estimating section 10 calculates estimation signals for the signals S. An estimation error calculating section 12 takes these estimation signals, calculates the estimation error power, and outputs the result to a normal equation deciding section 13, which determines, by the least-squares method, the estimated feature quantity vector that minimizes the estimation error power. A correction value calculating section 14 then takes the estimated feature quantity vector and determines a correction vector, which is applied to an estimation error judging section 15. The judging section 15 uses, for example, minimization of the estimation error power as the estimation error criterion, and the estimated feature vector corresponding to the minimum of the estimation error power is output as the final estimated feature vector, thereby determining the feature quantities of the sound source signal.

Description

DETAILED DESCRIPTION OF THE INVENTION (Field of Industrial Application) The present invention relates to a sound source feature extraction method for extracting the feature quantities (amplitude, frequency, azimuth) of sound source signals from sound sources located arbitrarily in space, for use in, for example, the signal processing of sonar performing underwater position measurement (acoustic positioning).

(Prior Art) Conventional techniques in this field include, for example, the one described in Oki Electric Research and Development, No. 4 (October 1986), Nitadori and Igarashi, "Digital Signal Processing Techniques in Underwater Acoustics," pp. 53-58.

As described in that document, sonar uses underwater acoustics to search for, locate, and classify targets moving in three-dimensional space (underwater).

Sonar is divided into passive sonar and active sonar according to its mode of operation. Passive sonar uses the sound waves radiated by the target, while active sonar transmits sound waves toward the target and uses the reflected waves (echoes) to search for, locate, and classify the target.

The signal processing used in sonar is divided into temporal processing, which is used to extract the temporal characteristics of the signal (waveform, spectrum, etc.), and spatial processing, which is used to extract its spatial characteristics (position, shape, moving speed, etc.).

Among the temporal processing techniques, the matched filter and the Wiener filter are filters that achieve the maximum signal-to-noise ratio (S/N ratio) when a signal with a specified waveform or spectrum is buried in noise with a known spectrum. Spectrum estimation estimates the intensity of a signal as a function of frequency, and is used to detect periodic signals (line spectra) buried in noise and to classify the targets that radiate them.
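As a simple illustration of the matched-filter idea only (not taken from the cited reference), correlating the received series with a known replica waveform maximizes the output S/N for white noise. A minimal Python sketch, with the assumed function name `matched_filter`:

```python
import numpy as np

def matched_filter(received, replica):
    """Correlate the received series with a known replica waveform.

    For a signal of known waveform buried in white noise, the output peaks
    at the replica's position, which is the sense in which S/N is maximized.
    """
    h = replica / np.linalg.norm(replica)        # unit-energy replica
    return np.correlate(received, h, mode="full")
```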

Among the spatial processing techniques, beamforming (BF) uses the output signals of the many receivers that make up a receiving array and exploits the directionality of waves propagating through space to improve the S/N ratio of the signal and to estimate the direction of incidence and the intensity (spatial spectrum) of the signal. Delay and phase estimation is the problem of estimating the delay or phase difference between signals received by a small number of receivers, and is used mainly for target position measurement; it can be regarded as a simplified form of beamforming.

The basic form of beamforming is delay-and-sum BF, in which delays that compensate for the differences in propagation delay are applied to the output signals of the many receivers before they are summed; when the signal is narrowband, phase compensation can be used instead of delay compensation. Spatial resolution is a particularly important beamforming characteristic, and a narrow main beam and a low sidelobe level are desirable. As a method of controlling the beam pattern, shading, in which the output signal of each receiver is multiplied by a predetermined weight, has been used and was effective mainly in suppressing the sidelobe level. Subsequently, sidelobe cancellation techniques were developed to remove interference arriving from a specific direction, and eventually adaptive beamforming (ABF) was devised to remove interference from arbitrary directions. More recently, signal subspace methods, which apply modern spectrum estimation techniques to bearing estimation and dramatically improve the resolution of the main beam, have been studied.
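For reference, the conventional delay-and-sum operation described above can be sketched as follows. This is an illustrative Python sketch only, not part of the patent; the function name `delay_and_sum`, the frequency-domain delay compensation, and the nominal sound speed of 1500 m/s are assumptions.

```python
import numpy as np

def delay_and_sum(signals, sensor_x, angles_deg, fs, c=1500.0):
    """Illustrative delay-and-sum beamformer for a linear array.

    signals    : (M, N) array, one row of samples per receiver.
    sensor_x   : (M,) receiver positions along the array axis [m].
    angles_deg : candidate steering angles measured from the array axis.
    fs         : sampling frequency [Hz]; c : assumed sound speed [m/s].
    Returns the beam output power for each steering angle (a spatial spectrum).
    """
    M, N = signals.shape
    S = np.fft.rfft(signals, axis=1)                 # compensate delays in the frequency domain
    freqs = np.fft.rfftfreq(N, d=1.0 / fs)
    powers = []
    for ang in np.deg2rad(angles_deg):
        delays = sensor_x * np.cos(ang) / c          # plane-wave delay per receiver (sign is a convention)
        phases = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
        beam = np.fft.irfft((S * phases).mean(axis=0), n=N)   # align, then sum (here: average)
        powers.append(np.mean(beam ** 2))
    return np.array(powers)
```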

(Problems to Be Solved by the Invention) However, with any of the above sound source feature extraction methods, obtaining high spatial resolution requires increasing the number of receivers in order to increase the amount of input data. Moreover, to obtain high resolution, the receivers must be arranged in a configuration suited to the sound source signal, such as a straight line or a circle; depending on the receiver arrangement, however, a window function is effectively imposed and the resolution deteriorates, so there are constraints on setting an appropriate arrangement. As a result, the increased number of receivers not only complicates the computation but also enlarges the apparatus and makes the receiver arrangement difficult, and these problems have been hard to solve.

The present invention provides a sound source feature extraction method that solves the problem of the prior art described above, namely that the number of receivers must be increased to obtain high spatial resolution, which makes it difficult to simplify the computation, simplify the apparatus structure, and reduce its size.

(Means for Solving the Problems) To solve the above problems, the first invention is a sound source feature extraction method in which a sound source signal from a sound source is received by a plurality of receivers, the received signals are sampled based on the sampling theorem for temporal and spatial signals to obtain discrete time-series signals, and the feature quantities of the sound source signal are extracted from the discrete time-series signals, wherein, based on the discrete time-series signals, the feature quantities of the sound source signal are formulated as a nonlinear combination of parameters, and the feature quantities of the sound source signal are calculated directly according to that formula.

The second invention is characterized in that, in the first invention, the parameters of the nonlinear combination are calculated using the least-squares linear Taylor differential correction method.

The third invention is characterized in that, in the first invention, the receivers are arranged so as to satisfy the sampling theorem.

(Operation) According to the first invention, with the sound source feature extraction method configured as described above, the sound source signals arriving from the sound sources are received by a plurality of receivers and sampled at a predetermined frequency based on the sampling theorem, so that they are converted into discrete time-series signals. Then, based on these discrete time-series signals, a mathematical model (formula) of the sound source signal is constructed using the feature quantities of the sound source signal.

The feature quantities of the sound source signal are then calculated directly according to the constructed formula. This allows feature extraction with high spatial resolution in a short analysis time without increasing the number of receivers.

In the second invention, the parameters of the nonlinear combination in this formulation are calculated using the least-squares linear Taylor differential correction method, so a formula with a small computational load can be set up accurately.

In the third invention, the receivers are arranged so as to satisfy the sampling theorem, which increases the freedom in the spatial placement of the receivers.

The above problems can therefore be solved.

(Embodiment) Fig. 1 is a functional block diagram of a sound source feature extraction apparatus using a sound source feature extraction method according to an embodiment of the present invention.

This sound source feature extraction apparatus extracts the feature quantities (amplitude, frequency, azimuth) of the sound source signals S1 from, for example, D sound sources 1 located underwater. It has a signal receiving section 2 that receives the sound source signals S1 from the sound sources 1, and a nonlinear feature quantity estimating section 10 is connected to its output side. The signal receiving section 2 comprises a plurality of receivers that convert the sound source signals S1 into electrical signals, and has the function of sampling the receiver outputs at a predetermined sampling frequency based on the sampling theorem and outputting discrete time-series signals s(n, i). Here, n in s(n, i) is the index of the sampled time-series data and i is the index of the receiver.

The nonlinear feature quantity estimating section 10 receives the discrete time-series signals s(n, i) and has the function of estimating the feature quantities of the sound source signals S1. It contains an estimation signal generating section 11 that calculates an estimate of the signal received by each receiver as a nonlinear combination function of the estimated feature quantities of the sound sources 1; connected to its output side is an estimation error calculating section 12 that calculates the estimation error of that estimate with respect to the received signal. Connected to the output side of the estimation error calculating section 12 are a normal equation determining section 13, a correction amount calculating section 14, and an estimation error judging section 15, and the estimation signal generating section 11 is connected to the output side of the estimation error judging section 15 via an estimated feature quantity updating section 16.

The normal equation determining section 13 has the function of determining a normal equation, based on least-squares linear Taylor differential correction, from the estimation error calculated by the estimation error calculating section 12 and the estimated feature quantities. The correction amount calculating section 14 has the function of calculating a correction amount for the estimated feature quantities from the normal equation determined by the normal equation determining section 13. The estimation error judging section 15 uses the estimation error obtained by the estimation error calculating section 12 to judge whether an estimation error criterion is satisfied; when it is not satisfied, the judging section controls the estimated feature quantity updating section 16 to update the estimated feature quantities. That is, in the estimated feature quantity updating section 16, the result of adding the correction amount obtained by the correction amount calculating section 14 to the estimated feature quantities is fed to the estimation signal generating section 11 as the new estimated feature quantities, and this update process is repeated until the estimation error satisfies the estimation error criterion. When the estimation error calculated by the estimation error calculating section 12 satisfies the criterion, the estimated feature quantities are output from the estimation error judging section 15 as the final estimated feature quantities of the sound sources 1.

Next, the sound source feature extraction method of this embodiment will be explained using the sound source feature extraction apparatus described above.

First, assume that there are D sound sources 1 and that each sound source 1 reaches the receivers as a plane wave. The feature quantity vector P of the sound sources 1 to be extracted consists, as shown in the following equation (1), of the amplitude a_j, the angular frequency ω_j, and the azimuth θ_j (j = 1, 2, ..., D) of each sound source 1. Likewise, as shown in the following equation (2), let p̂ denote the estimated feature quantity vector corresponding to P, with â_j the estimated amplitude, ω̂_j the estimated angular frequency, and θ̂_j the estimated azimuth of each sound source 1 (j = 1, 2, ..., D).

P = (p_1, p_2, ..., p_{3D}) = (a_1, a_2, ..., a_D, ω_1, ω_2, ..., ω_D, θ_1, θ_2, ..., θ_D)   ... (1)

p̂ = (p̂_1, p̂_2, ..., p̂_{3D}) = (â_1, â_2, ..., â_D, ω̂_1, ω̂_2, ..., ω̂_D, θ̂_1, θ̂_2, ..., θ̂_D)   ... (2)

Let M be the number of receivers in the signal receiving section 2 and r_i (i = 1, 2, ..., M) the position vector of each receiver. The signal received by the i-th receiver is sampled in the signal receiving section 2 at the sampling frequency f_s, the sampled discrete time-series signal is denoted s(n, i), and one frame is treated as a block of N data points.

When the sound source signals S1 from the D sound sources 1 are received by the signal receiving section 2 and the discrete time-series signals s(n, i) are output from it to the nonlinear feature quantity estimating section 10, the estimation signal generating section 11 in the nonlinear feature quantity estimating section 10 calculates the estimation signal ŝ(n, i) for the received discrete time-series signal s(n, i) as follows.

That is, when each sound source 1 is incident on the receivers as a plane wave, the estimation signal ŝ(n, i) for the received signal s(n, i) of the i-th receiver is modeled by the following equation (3), a nonlinear combination function of the estimated amplitude â_j, estimated angular frequency ω̂_j, and estimated azimuth θ̂_j (j = 1, 2, ..., D):

ŝ(n, i) = Σ_{j=1}^{D} â_j cos(ω̂_j n + k̂_j · r_i)   ... (3)

where n = 1, 2, ..., N; i = 1, 2, ..., M; and k̂_j is the wavenumber vector (carrying the azimuth θ̂_j).
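A minimal Python sketch of the signal model of equation (3) follows. The function name `estimated_signal`, the two-dimensional receiver coordinates, the interpretation of ω̂_j in rad/s (converted to time by n/fs), and the nominal sound speed of 1500 m/s used to form the wavenumber vector are assumptions made for illustration; the patent leaves these conventions implicit.

```python
import numpy as np

def estimated_signal(p_hat, r, n_samples, fs, c=1500.0):
    """Evaluate Eq. (3): s_hat(n, i) = sum_j a_j * cos(w_j * n / fs + k_j . r_i).

    p_hat : (3D,) estimated feature vector (a_1..a_D, w_1..w_D, theta_1..theta_D),
            with w_j in rad/s and theta_j in radians.
    r     : (M, 2) receiver position vectors r_i [m].
    Returns an (N, M) array s_hat(n, i).
    """
    D = len(p_hat) // 3
    a, w, theta = p_hat[:D], p_hat[D:2 * D], p_hat[2 * D:]
    n = np.arange(1, n_samples + 1)[:, None, None]            # time index n = 1..N
    # Wavenumber vector k_j = (w_j / c) * (cos(theta_j), sin(theta_j)).
    k = (w / c)[:, None] * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    phase = w[None, None, :] * n / fs + np.einsum('jd,id->ij', k, r)[None, :, :]
    return np.sum(a[None, None, :] * np.cos(phase), axis=2)
```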

The estimation error calculating section 12 receives the estimation signal ŝ(n, i), calculates the estimation error power ε of the estimation signal ŝ(n, i) with respect to the received signal s(n, i) according to the following equation (4), and outputs the result to the normal equation determining section 13:

ε = Σ_{n=1}^{N} Σ_{i=1}^{M} (s(n, i) − ŝ(n, i))²   ... (4)

The normal equation determining section 13 obtains, by the least-squares method, the estimated feature quantity vector p̂ that minimizes the estimation error power ε. Since the estimation signal ŝ(n, i) is a function of the nonlinear combination of the estimated feature quantities â_j, ω̂_j, θ̂_j (j = 1, 2, ..., D), the least-squares linear Taylor differential correction method is applied.

In this Taylor differential correction method, the estimation signal ŝ(n, i) is Taylor-expanded around each estimated feature quantity up to the first-order terms; substituting this into equation (4) and applying the principle of least squares yields the following normal equation (5), which determines the correction vector δP for the estimated feature quantity vector p̂ that reduces the estimation error power ε:

F · δP = E   ... (5)

where the vector E = (e_1, e_2, ..., e_{3D}), the matrix F = (f_{kl}) (k, l = 1, 2, ..., 3D), the vector elements e_k = Σ_{n=1}^{N} Σ_{i=1}^{M} (s(n, i) − ŝ(n, i)) g_k(n, i), the matrix elements f_{kl} = Σ_{n=1}^{N} Σ_{i=1}^{M} g_k(n, i) g_l(n, i), and g_k(n, i) = ∂ŝ(n, i)/∂p̂_k (k = 1, 2, ..., 3D).
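For clarity, the step from the first-order Taylor expansion to the normal equation (5) can be written out as below; this derivation is reconstructed from the definitions above and is not verbatim text of the original publication.

```latex
\hat{s}(n,i;\,\hat{\mathbf p}+\delta\mathbf P)\;\approx\;\hat{s}(n,i;\,\hat{\mathbf p})
   +\sum_{k=1}^{3D} g_k(n,i)\,\delta p_k ,
\qquad g_k(n,i)=\frac{\partial \hat{s}(n,i)}{\partial \hat{p}_k}.
```

Substituting this expansion into the error power ε of equation (4) and setting ∂ε/∂(δp_k) = 0 for k = 1, 2, ..., 3D gives

```latex
\sum_{\ell=1}^{3D}\underbrace{\Bigl[\sum_{n=1}^{N}\sum_{i=1}^{M} g_k(n,i)\,g_\ell(n,i)\Bigr]}_{f_{k\ell}}\,\delta p_\ell
=\underbrace{\sum_{n=1}^{N}\sum_{i=1}^{M}\bigl(s(n,i)-\hat{s}(n,i)\bigr)\,g_k(n,i)}_{e_k},
\qquad\text{i.e.}\qquad \mathbf F\,\delta\mathbf P=\mathbf E .
```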

Next, the correction amount calculating section 14 receives the estimated feature quantity vector p̂, obtains the correction vector δP from equation (5), that is, from the following equation (5-1), and supplies it to the estimation error judging section 15:

δP = F⁻¹ · E   ... (5-1)

The estimation error judging section 15 uses, for example, minimization of the estimation error power ε as the estimation error criterion. When the minimum has not yet been reached, the estimated feature quantity updating section 16 adds the correction vector δP to the estimated feature quantity vector p̂, as shown in the following equation (6), and takes the result as the new estimated feature quantity vector p̂.

p̂ ← p̂ + δP   ... (6)

The above procedure is repeated, and the estimated feature quantity vector p̂ that minimizes the estimation error power ε is output as the final estimated feature quantity vector. In this way the feature quantities of the sound source signals, namely the amplitudes a_j, the angular frequencies ω_j, and the azimuths θ_j, are obtained.
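The loop over equations (4) to (6) is a Gauss-Newton style iteration. The Python sketch below, reusing the `estimated_signal` function assumed earlier, illustrates it; the function name `taylor_correction`, the forward-difference Jacobian, and the stopping rule are illustrative assumptions (the patent forms the partial derivatives g_k analytically).

```python
import numpy as np

def taylor_correction(s_obs, p0, r, fs, n_iter=30, eps=1e-6, c=1500.0):
    """Least-squares linear Taylor differential correction (Gauss-Newton) sketch.

    Repeats F.dP = E (Eq. 5) and p <- p + dP (Eq. 6) until the estimation
    error power (Eq. 4) stops decreasing, mirroring blocks 11-16 of Fig. 1.
    """
    p = np.asarray(p0, dtype=float).copy()
    N, M = s_obs.shape
    prev_err = np.inf
    for _ in range(n_iter):
        s_hat = estimated_signal(p, r, N, fs, c)        # block 11: estimation signal
        resid = (s_obs - s_hat).ravel()                 # s(n,i) - s_hat(n,i)
        err = np.sum(resid ** 2)                        # block 12: error power, Eq. (4)
        if err >= prev_err - eps:                       # block 15: error criterion
            break
        prev_err = err
        # Numerical Jacobian G[:, k] ~ g_k(n,i) = d s_hat / d p_k (forward differences).
        G = np.empty((N * M, p.size))
        for k in range(p.size):
            dp = np.zeros_like(p)
            dp[k] = 1e-6 * max(1.0, abs(p[k]))
            G[:, k] = ((estimated_signal(p + dp, r, N, fs, c) - s_hat) / dp[k]).ravel()
        F = G.T @ G                                     # block 13: matrix F (f_kl)
        E = G.T @ resid                                 # block 13: vector E (e_k)
        delta = np.linalg.lstsq(F, E, rcond=None)[0]    # block 14: dP = F^{-1} E, Eq. (5-1)
        p = p + delta                                   # block 16: Eq. (6)
    return p
```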

Fig. 2 is a diagram comparing the analysis results of the sound source feature extraction method of this embodiment with those of a conventional method, here a beamformer. In Fig. 2, the horizontal axis is the azimuth θ (°) and the vertical axis is power, and the directivity characteristic curve of the conventional beamformer is shown.

To compare the sound source feature extraction method of this embodiment with the conventional beamformer, 37 receivers are arranged at 5° intervals on a semicircle of radius 1.5 m, and two sound sources arrive at the receivers as plane waves with amplitudes a1 = 1 V and a2 = 1 V, frequencies f1 = 2.0 kHz and f2 = 2.0 kHz, and azimuths θ1 = 81.0° and θ2 = 80.0°.

In this embodiment, only the five adjacent receivers (M = 5) located between 80° and 100° out of the 37 are used, and each feature quantity is estimated with an analysis time of 20 msec. The beamformer, in contrast, uses all 37 receivers (M = 37) and estimates each feature quantity with an analysis time of 40 msec (sampling frequency f_s = 8 kHz).
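Purely as an illustration of the scenario above, reusing the `estimated_signal` and `taylor_correction` sketches: the receiver indexing, the perturbed initial guess `p0`, and the noise-free synthetic data below are assumptions made for this example and are not part of the patent.

```python
import numpy as np

fs, c = 8000.0, 1500.0
angles = np.deg2rad(np.arange(0, 185, 5))              # 37 receiver bearings, 0..180 deg
r_all = 1.5 * np.stack([np.cos(angles), np.sin(angles)], axis=1)   # semicircle, radius 1.5 m

true_p = np.array([1.0, 1.0,                                    # amplitudes a1, a2 [V]
                   2 * np.pi * 2000.0, 2 * np.pi * 2000.0,      # angular frequencies [rad/s]
                   np.deg2rad(81.0), np.deg2rad(80.0)])         # azimuths

N = int(0.020 * fs)                                    # 20 ms analysis frame
s_obs = estimated_signal(true_p, r_all, N, fs, c)      # noise-free synthetic receptions

idx = np.arange(16, 21)                                # five receivers spanning 80..100 deg
p0 = true_p + np.array([0.2, -0.2, 50.0, -50.0,
                        np.deg2rad(3.0), np.deg2rad(-3.0)])     # perturbed initial guess
p_est = taylor_correction(s_obs[:, idx], p0, r_all[idx], fs, c=c)
print(np.rad2deg(p_est[4:]))                           # estimated azimuths [deg]
```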

As is clear from the beamformer directivity characteristic in Fig. 2, the conventional beamformer gives maxima at θ1 = 80.5° and θ2 = 80.3°, and the peaks corresponding to the two sound sources overlap, making them hard to distinguish. By contrast, the estimates of this embodiment are θ1 = 81.0° and θ2 = 79.9°; the two sources are clearly separated and the estimates are more accurate.

Thus, in this embodiment, the sound source signals S1 arriving at the signal receiving section 2 are modeled in the nonlinear feature quantity estimating section 10 as a nonlinear combination of the feature quantities of the sound source signals S1, and those feature quantities are calculated directly. High-resolution feature extraction is therefore possible with a small number of receivers and a short analysis time. Furthermore, because few receivers are needed, the amount of computation is small, which simplifies the apparatus structure and reduces its size. In addition, since the parameters of the nonlinear combination are calculated using the least-squares linear Taylor differential correction method, the equations in the normal equation determining section 13 can be determined accurately.

Furthermore, if the spatial arrangement of the receivers is set so as to satisfy only the sampling theorem for temporal and spatial signals, the freedom in receiver placement increases, and a sound source feature extraction apparatus having the above advantages of shorter analysis time, higher spatial resolution, and smaller size can be designed optimally regardless of the receiver arrangement.

The present invention is not limited to the above embodiment; various modifications are possible, such as implementing the nonlinear feature quantity estimating section 10 of Fig. 1 with a digital signal processor (DSP) or with program control using a microcomputer, or applying the invention to extract the feature quantities of sound sources in the air.

(Effects of the Invention) As described in detail above, according to the first invention, the sound source signal arriving at the receivers is modeled as a nonlinear combination of its feature quantities and those feature quantities are calculated directly, so the number of receivers used is reduced, the analysis time is shortened, the spatial resolution is increased, and the apparatus structure is simplified and miniaturized.

In the second invention, the parameters of the nonlinear combination are calculated using the least-squares linear Taylor differential correction method, so the modeling formula is determined accurately, which improves the feature extraction precision.

Note that the parameters of the nonlinear combination can also be obtained by methods other than the least-squares linear Taylor differential correction method.

In the third invention, the receivers are arranged so as to satisfy the sampling theorem, which increases the freedom in their spatial placement; substantially the same effects as in the first invention are obtained regardless of the receiver arrangement, and an optimal design of the sound source feature extraction apparatus becomes possible.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is a functional block diagram of a sound source feature extraction apparatus using a sound source feature extraction method according to an embodiment of the present invention, and Fig. 2 is a diagram comparing the analysis results of this embodiment with those of the conventional method. 1: sound source; 2: signal receiving section; 10: nonlinear feature quantity estimating section; 11: estimation signal generating section; 12: estimation error calculating section; 13: normal equation determining section; 14: correction amount calculating section; 15: estimation error judging section; 16: estimated feature quantity updating section.

Claims (1)

[Claims] 1. A sound source feature extraction method in which a sound source signal from a sound source is received by a plurality of receivers, the received signals are sampled based on the sampling theorem for temporal and spatial signals to obtain discrete time-series signals, and feature quantities of the sound source signal are extracted from the discrete time-series signals, characterized in that, based on the discrete time-series signals, the feature quantities of the sound source signal are formulated as a nonlinear combination of parameters, and the feature quantities of the sound source signal are calculated directly according to that formula. 2. The sound source feature extraction method according to claim 1, wherein the parameters of the nonlinear combination are calculated using a least-squares linear Taylor differential correction method. 3. The sound source feature extraction method according to claim 1, wherein the receivers are arranged so as to satisfy the sampling theorem.
JP2180886A 1990-07-09 1990-07-09 Sound source feature extraction method Expired - Fee Related JP2763821B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2180886A JP2763821B2 (en) 1990-07-09 1990-07-09 Sound source feature extraction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2180886A JP2763821B2 (en) 1990-07-09 1990-07-09 Sound source feature extraction method

Publications (2)

Publication Number Publication Date
JPH0466888A true JPH0466888A (en) 1992-03-03
JP2763821B2 JP2763821B2 (en) 1998-06-11

Family

ID=16091056

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2180886A Expired - Fee Related JP2763821B2 (en) 1990-07-09 1990-07-09 Sound source feature extraction method

Country Status (1)

Country Link
JP (1) JP2763821B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104330768A (en) * 2013-12-04 2015-02-04 河南科技大学 Maneuvering sound source position estimation method based on acoustic vector sensor

Also Published As

Publication number Publication date
JP2763821B2 (en) 1998-06-11

Similar Documents

Publication Publication Date Title
US8005237B2 (en) Sensor array beamformer post-processor
Ramirez Jr et al. Synthetic aperture processing for passive co-prime linear sensor arrays
CN109581352B (en) Super-resolution angle measurement system based on millimeter wave radar
US8593903B2 (en) Calibrating a multibeam sonar apparatus
CN109143190B (en) Broadband steady self-adaptive beam forming method for null broadening
Ramirez et al. Exploiting array motion for augmentation of co-prime arrays
JPH10207490A (en) Signal processor
CN114114153A (en) Multi-sound-source positioning method and system, microphone array and terminal device
CN111722225B (en) Bistatic SAR two-dimensional self-focusing method based on prior phase structure information
CN111175727B (en) Method for estimating orientation of broadband signal based on conditional wave number spectral density
CN103983946A (en) Method for processing singles of multiple measuring channels in sound source localization process
CN109669172B (en) Weak target direction estimation method based on strong interference suppression in main lobe
CN109884621B (en) Radar altimeter echo coherent accumulation method
CN115453530B (en) Double-base SAR filtering back projection two-dimensional self-focusing method based on parameterized model
JPH0466888A (en) Extraction of feature for sound source
Guerin et al. A hybrid time-frequency approach for the noise localization analysis of aircraft fly-overs
Zhang et al. A hybrid time and frequency domain beamforming method for application to source localisation on high-speed trains
Ramirez et al. Exploiting platform motion for passive source localization with a co-prime sampled large aperture array
JP2690606B2 (en) Sound source number determination method
Zhong et al. Design and assessment of a scan-and-sum beamformer for surface sound source separation
Zhu et al. Design of wide-band array with frequency invariant beam pattern by using adaptive synthesis method
CN111505636B (en) Improved RD algorithm for bistatic SAR with constant acceleration
US11329705B1 (en) Low-complexity robust beamforming for a moving source
Zhou et al. A spatial resampling minimum variance beamforming technique based on diagonal reduction
JP2763819B2 (en) Sound source feature extraction method

Legal Events

Date Code Title Description
LAPS Cancellation because of no payment of annual fees