WO2019061439A1 - Improved sound source localization method based on a progressive serial orthogonalization blind source separation algorithm, and implementation system therefor - Google Patents

Improved sound source localization method based on a progressive serial orthogonalization blind source separation algorithm, and implementation system therefor

Info

Publication number
WO2019061439A1 (application PCT/CN2017/104879)
Authority
WO
WIPO (PCT)
Prior art keywords
sound
signal
sound source
delay
algorithm
Prior art date
Application number
PCT/CN2017/104879
Other languages
English (en)
French (fr)
Inventor
周冉冉
崔浩
王永
郭晓宇
倪暹
Original Assignee
山东大学
Priority date
Filing date
Publication date
Application filed by 山东大学 (Shandong University)
Publication of WO2019061439A1 publication Critical patent/WO2019061439A1/zh


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 5/00: Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S 5/18: Position-fixing using ultrasonic, sonic, or infrasonic waves
    • G01S 5/20: Position of source determined by a plurality of spaced direction-finders
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00: Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/02: Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0272: Voice signal separating
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/03: characterised by the type of extracted parameters
    • G10L 25/24: the extracted parameters being the cepstrum

Definitions

  • the invention relates to an improved sound source localization method based on progressive serial orthogonalization blind source separation algorithm and an implementation system thereof, and belongs to the technical field of sound source localization.
  • Sound is an important carrier of information in nature.
  • By acquiring sound signals, people can obtain not only the voice information carried by the sound but also, through sound source localization, the bearing of the sound, based on the characteristics of sound propagation and the propagation path itself.
  • Earlier methods for localizing an unknown target relied mainly on radio, laser, or ultrasound: a detection signal is actively transmitted, the wave reflected by the measured object is received, and the object's position is analyzed and calculated from it. Because the detection is active and both transmission and reception use waves of a pre-assigned frequency, these methods are not susceptible to interference from the natural environment and offer high precision and strong anti-interference performance. However, active localization requires a strong transmit power, so it cannot be applied in low-power or energy-constrained environments.
  • Sound source localization, by contrast, is passive: it is easy to conceal, uses ubiquitous sound waves, and has low equipment cost and low power consumption, so it has attracted wide attention and application.
  • Blind source separation is a signal processing method developed in the 1990s. Without knowing the source signals or the parameters of the transmission channel, it recovers the components of the source signals from the observed signals alone, based on the statistical characteristics of the sources.
  • The "source" here refers to the original signals, i.e., the independent components; "blind" means both that the source signals cannot be observed and that the way they are mixed is unknown. Blind source separation can therefore be used to process mixed sound signals when neither the source signals nor the transmission channel parameters are known.
  • The progressive serial orthogonalization blind source separation algorithm is one kind of blind source separation algorithm; it finds the independent components by a fixed-point iteration with progressive orthogonalization.
  • Sound source localization based on arrival delay: a sound wave propagating in air at constant speed arrives at a pair of receivers located at different positions with different phases. From the phase difference of the received sound signals, a delay algorithm yields the time difference of arrival at each receiver, from which the position of the sound source is found.
  • This localization approach has several advantages: first, the equipment requirements are modest; second, the steps are simple and the computational load is small; third, it is easy to combine with other systems that need localization data.
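The arrival-delay principle above can be sketched as a forward model: for a known source position and receiver layout, the expected time differences of arrival follow directly from the path-length differences. A minimal sketch, assuming the patent's tetrahedral microphone layout; the spacing a = 0.1 m and the speed of sound 343 m/s are illustrative values, not taken from the patent.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s (assumed)
a = 0.1    # microphone spacing parameter (assumed)
mics = np.array([[0, 0, 0], [a, 0, 0], [0, a, 0], [0, 0, a]], float)

def expected_tdoas(source):
    """Delays of microphones 2..4 relative to microphone 1 (seconds)
    for a source at `source`, from the path-length differences."""
    d = np.linalg.norm(mics - np.asarray(source, float), axis=1)
    return (d[1:] - d[0]) / C

tdoas = expected_tdoas([1.0, 2.0, 0.5])
```

A source equidistant from the three off-origin microphones, e.g. on the diagonal (1, 1, 1), yields three identical delay differences, which is a quick sanity check of the geometry.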
  • Chinese patent document CN104181506A discloses a sound source localization method based on improved PHAT-weighted time delay estimation: a microphone array acquires 4 channels of sound signals, which are converted into digital signals by A/D sampling circuits; time delay estimates are obtained by an improved PHAT-weighted generalized cross-correlation function method; combined with the spatial positions of the placed microphone array, the nonlinear equations are solved by an iterative method to obtain the relative position of the sound source.
  • However, the system described in that patent can neither recognize multiple sound sources nor distinguish directional noise.
  • Chinese patent document CN104614069A discloses a power equipment fault sound detection method based on a joint approximate diagonalization blind source separation algorithm, with the specific steps: (1) use a microphone array; (2) apply the joint approximate diagonalization blind source separation algorithm to the sound signals collected by the microphone array in step (1) to separate the independent sound source signals; (3) extract the Mel-frequency cepstral coefficients (MFCC) of each independent sound source signal as sound feature parameters and identify the sound signals with a pattern matching algorithm: after the sound template under test is matched against all reference sample templates, the reference sample template with the smallest matching distance is the recognition result for the working sound of the power equipment.
  • However, the performance of the joint approximate diagonalization algorithm used in that patent is strongly affected by the number of covariance matrices: the more matrices, the more complex the computation.
  • To overcome the inability of existing sound source localization methods to recognize multiple sound sources, the present invention proposes an improved sound source localization method based on the progressive serial orthogonalization blind source separation algorithm.
  • the present invention also proposes an implementation system for the above improved sound source localization method.
  • An improved sound source localization method based on the progressive serial orthogonalization blind source separation algorithm comprises the following steps:
  • step (1) Collect sound signals through a microphone array and store them;
  • step (2) Separate the sound signals collected in step (1) using the progressive serial orthogonalization blind source separation algorithm to obtain the individual independent sound source signals;
  • step (3) For each independent sound source signal obtained in step (2), extract the Mel-frequency cepstral coefficients (MFCC) as sound feature parameters, identify the sound signals by a pattern matching algorithm, and select the independent sound source signal of the sound to be localized;
  • step (4) According to the pattern matching result of step (3): if there is a single sound source, proceed to step (5); if there are multiple sound sources, calculate the time delays by the TDOA algorithm and solve for the sound source position;
  • step (5) First, coarse localization: obtain the envelope of the signal, sample at low resolution, roughly calculate the delay by the generalized cross-correlation function method, and shift the signal in the time domain by the coarsely estimated number of points. Then, fine localization: sample at high resolution, calculate the delay by the generalized cross-correlation function method, obtain the precise delay, and solve for the sound source position.
  • In the traditional TDOA algorithm, the accuracy of the delay estimate is limited by the sampling frequency: the higher the required precision, the higher the sampling frequency needed. For the same sampling duration, a high sampling frequency yields an extremely large number of sampling points, so the computational load of the algorithm grows accordingly.
  • In the coarse-then-fine localization algorithm, the signal is first shifted in the time domain using a low-resolution estimate, and high resolution is then used for high-precision delay calibration.
  • Compared with the traditional algorithm at low-resolution sampling, this algorithm achieves the computational accuracy of high-resolution sampling.
  • Compared with the traditional algorithm at high-resolution sampling, because one time-domain shift has already been performed, only a short effective duration is needed during high-precision calibration to compute the delay, which reduces the computational load. Based on this principle, the algorithm also relaxes the distance limitation between the sampling microphones: when the delay exceeds the effective duration, a single coarse time-domain shift suffices to make the precise delay computable.
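The coarse-then-fine procedure can be sketched as follows: estimate the lag on decimated signals (resolution of n samples), shift by that amount, then refine at full resolution and add the two contributions. This sketch uses a plain cross-correlation peak at both stages rather than the PHAT-weighted version the patent describes; function names are illustrative.

```python
import numpy as np

def xcorr_lag(x, y):
    """Lag (in samples) by which y is delayed relative to x,
    found as the peak of the full cross-correlation."""
    c = np.correlate(y, x, mode="full")      # index len(x)-1 is zero lag
    return int(np.argmax(c)) - (len(x) - 1)

def coarse_fine_delay(x1, x2, n):
    """Two-stage delay estimate (in full-rate samples).

    Coarse: correlate the signals decimated by n (resolution n samples),
    then shift x2 back by that coarse lag in the time domain.
    Fine: correlate the aligned full-rate signals for the residual lag.
    """
    coarse = xcorr_lag(x1[::n], x2[::n]) * n  # coarse lag, full-rate samples
    x2_shift = np.roll(x2, -coarse)           # one time-domain shift
    fine = xcorr_lag(x1, x2_shift)            # high-precision calibration
    return coarse + fine
```

With a delay of 13 samples and decimation n = 4, the coarse stage recovers the lag to within one decimated step and the fine stage supplies the remainder, so the sum equals the true delay.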
  • Preferably, the precise delay is obtained according to step (5) by the following steps:
  • step A: Let the 4 sound signals obtained through step (3) be x1(t), x2(t), x3(t), x4(t), where t is the index of the sampling point in the digital signal and the length is N; window and filter the 4 channels of sound signals to eliminate noise;
  • step F: Take the first N1 points of x1(t) and xs(t) as z1(t) and zs(t), where N1 is an integer greater than 2n and less than N; N1 is the signal length and Fs is the sampling frequency;
  • The generalized cross-correlation is then used to obtain the precise delay point count n″12: the signals z1(t) and z2(t) are Fourier-transformed into the frequency domain, the cross-power spectrum is PHAT-weighted, and the result is inverse Fourier-transformed back into the time domain to obtain the cross-correlation function; the lag at which the cross-correlation is maximal gives the two-channel delay estimate n″12; n″13 and n″14 are calculated in the same way as n″12;
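The step above (FFT, PHAT-weighted cross-power spectrum, inverse FFT, peak pick) is the classical GCC-PHAT estimator. A minimal sketch: a small constant is added to the denominator to avoid division by zero, and a zero-indexed unwrap convention (k − N) is used for peaks past the midpoint.

```python
import numpy as np

def gcc_phat(x1, x2, fs):
    """Delay of x2 relative to x1, in seconds, via PHAT-weighted
    generalized cross-correlation."""
    n = len(x1)
    X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
    G = np.conj(X1) * X2            # cross-power spectrum
    G /= np.abs(G) + 1e-12          # PHAT weighting: keep phase only
    r = np.fft.irfft(G, n)          # generalized cross-correlation
    k = int(np.argmax(r))
    if k > n // 2:                  # peak past midpoint: wrapped negative lag
        k -= n
    return k / fs
```

On an impulsive test signal a positive shift of 5 samples at fs = 1000 Hz comes back as +5 ms and a negative shift as −5 ms.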
  • Preferably, in step (4), if there are multiple sound sources, the delay is calculated by the TDOA algorithm through the following steps:
  • step a: Take the independent component to be localized from step (2) as yi(t), where i is an integer, 1 ≤ i ≤ 4, and t is the index of the sampling point in the digital signal. Window and filter the five signals yi(t), x1(t), x2(t), x3(t), x4(t), then Fourier-transform them into the frequency domain to obtain the frequency-domain signals Yi(k), X1(k), X2(k), X3(k), X4(k), where k is the index of the digital-signal sampling point corresponding to t;
  • step b: With yi(t) as the reference signal, the PHAT-weighted cross-power spectra are inverse-transformed to obtain the generalized cross-correlation functions Ri1(n), Ri2(n), Ri3(n), Ri4(n); the lags n at which these take their maxima give the delay estimates ti1, ti2, ti3, ti4 of the 4 sound signals x1(t), x2(t), x3(t), x4(t) relative to the reference signal yi(t).
  • Let ni1 be the n at which Ri1(n) takes its maximum, where the number of sound-signal points taken is N and the sampling frequency is Fs: if ni1 > N/2, then ti1 = (ni1 − N − 1)/Fs; if ni1 ≤ N/2, then ti1 = ni1/Fs;
  • Likewise, let nis be the n at which Ris(n) takes its maximum, s = 2, 3, 4: if nis > N/2, then tis = (nis − N − 1)/Fs; if nis ≤ N/2, then tis = nis/Fs.
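The truncated conditionals above follow the same pattern the patent states for n′1s: a correlation peak index greater than N/2 is a wrapped negative lag and is updated to n − N − 1 (consistent with 1-based correlation indexing), and the delay in seconds is the signed lag divided by Fs. A minimal sketch of that conversion:

```python
def index_to_delay(n_peak, N, Fs):
    """Convert a cross-correlation peak index to a signed delay in seconds,
    mapping wrapped indices above N/2 back to negative lags."""
    if n_peak > N / 2:
        n_peak = n_peak - N - 1  # the patent's update rule for n'_1s
    return n_peak / Fs
```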
  • Solving for the sound source position includes: setting the sound source position coordinates to (x, y, z) and, with the delay parameters obtained, finding the position coordinates of the sound source from equation (VIII):
  • The microphone array is obtained by placing microphones at the four positions (0, 0, 0), (a, 0, 0), (0, a, 0), (0, 0, a) selected in a three-dimensional Cartesian coordinate system.
  • Here a is a fixed parameter, denoting the distance from each of the three positions (a, 0, 0), (0, a, 0), (0, 0, a) to the microphone at the coordinate-system origin (0, 0, 0).
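With the tetrahedral geometry above, the position equations amount to intersecting hyperboloids ||r − m_s|| − ||r − m_1|| = c·t_s for s = 2, 3, 4, which can be solved numerically, for example by Gauss-Newton iteration. An illustrative sketch; the spacing a = 0.5 m and the initial guess are assumptions, not values from the patent.

```python
import numpy as np

C, a = 343.0, 0.5  # speed of sound; mic spacing (both assumed)
mics = np.array([[0, 0, 0], [a, 0, 0], [0, a, 0], [0, 0, a]], float)

def locate(tdoas, x0=(0.8, 0.8, 0.8), iters=100):
    """Gauss-Newton solve of ||r - m_s|| - ||r - m_1|| = C*t_s, s = 2..4."""
    r = np.array(x0, float)
    for _ in range(iters):
        d = np.linalg.norm(mics - r, axis=1)
        f = (d[1:] - d[0]) - C * np.asarray(tdoas)  # hyperboloid residuals
        u = (r - mics) / d[:, None]                 # unit vectors mic -> r
        J = u[1:] - u[0]                            # Jacobian of residuals
        r += np.linalg.solve(J.T @ J + 1e-12 * np.eye(3), -J.T @ f)
    return r
```

Feeding back the delay differences of a known source position recovers that position, which checks that the residual equations are consistent with the forward geometry.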
  • step (2) The sound signals collected in step (1) are separated by the progressive serial orthogonalization blind source separation algorithm to obtain the individual independent sound source signals; the steps are as follows:
  • The whitening process uses principal component analysis to decorrelate and scale the signals.
  • The linear whitening transform V is as shown in equation (XV): V = D^(-1/2)·E^T  (XV)
  • In equation (XV), E is the matrix whose columns are the unit-norm eigenvectors of the covariance matrix C, and D = diag(d1, d2, d3, d4) is the diagonal matrix whose diagonal elements are the eigenvalues of C;
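The whitening transform can be sketched in a few lines of numpy: build the covariance matrix, eigendecompose it into E and D, form V = D^(-1/2)·E^T, and apply it so the whitened channels are uncorrelated with unit variance. A minimal sketch on a synthetic 4-channel mixture.

```python
import numpy as np

def whiten(x):
    """PCA whitening: z = V x with V = D^(-1/2) E^T, where E holds the
    unit-norm eigenvectors of the covariance matrix as columns and
    D its eigenvalues; z is decorrelated with unit variance."""
    xc = x - x.mean(axis=1, keepdims=True)  # center each channel
    cov = np.cov(xc)                        # 4x4 covariance matrix
    d, E = np.linalg.eigh(cov)              # eigenvalues d, eigenvectors E
    V = np.diag(1.0 / np.sqrt(d)) @ E.T     # linear whitening transform
    return V @ xc, V

# Demo on a random 4-channel mixture.
rng = np.random.default_rng(0)
mix = np.array([[1.0, 0, 0, 0], [0.5, 1, 0, 0], [0, 0.3, 1, 0], [0.2, 0, 0, 1]])
x = mix @ rng.standard_normal((4, 2000))
z, V = whiten(x)
```

After whitening, the sample covariance of z is the identity matrix, which is the property the fixed-point iteration that follows relies on.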
  • step a: Determine the number of independent components of the observed signal z(t), denoted m, with m ≤ 4; because the microphone array in step a consists of 4 microphones, 4 channels of sound signals are collected, and by the principle of blind source separation the number of independent components cannot exceed the number of observed signals.
  • step 6: Check whether the normalized wp from step 5 has converged; if it has not converged, return to step 4;
  • step 7: Update p to p + 1; if p ≤ m, return to step 4; otherwise, proceed to step 8;
  • the m independent components of the microphone array are obtained by blind source separation, that is, independent sound source signals.
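The deflationary loop above (one unit vector at a time, with the projections onto previously found vectors subtracted after each fixed-point update) can be sketched as follows. This assumes whitened input z and uses the tanh contrast function; the nonlinearity, tolerances, and demo sources are illustrative choices, not specified by the patent.

```python
import numpy as np

def fastica_deflation(z, m, iters=200, seed=0):
    """Fixed-point ICA with progressive serial (deflationary)
    orthogonalization: estimate w_p one at a time; after each update,
    Gram-Schmidt against the previously found vectors and renormalize."""
    rng = np.random.default_rng(seed)
    dim = z.shape[0]
    W = np.zeros((m, dim))
    for p in range(m):
        w = rng.standard_normal(dim)
        w /= np.linalg.norm(w)
        for _ in range(iters):
            g = np.tanh(w @ z)
            dg = 1.0 - g ** 2                             # tanh derivative
            w_new = (z * g).mean(axis=1) - dg.mean() * w  # fixed-point step
            w_new -= W[:p].T @ (W[:p] @ w_new)            # serial orthogonalization
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1.0) < 1e-10
            w = w_new
            if converged:                                 # stable up to sign
                break
        W[p] = w
    return W @ z, W

# Demo: separate two artificial non-Gaussian sources after whitening.
rng = np.random.default_rng(1)
s = np.vstack([rng.uniform(-1, 1, 5000), rng.laplace(0, 1, 5000)])
x = np.array([[1.0, 0.6], [0.4, 1.0]]) @ s   # mixed observations
xc = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(xc))
z = np.diag(d ** -0.5) @ E.T @ xc            # whiten first
y, W = fastica_deflation(z, 2)
```

Blind source separation recovers the sources only up to sign and permutation, so quality is checked by the absolute correlation between each true source and its best-matching recovered component.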
  • Step (3) extracts the Mel-frequency cepstral coefficients (MFCC) for each of the obtained independent sound source signals.
  • The source signal y(t) after pre-emphasis is divided into frames, with a frame length of 10 ms-30 ms and a frame shift of 1/2-1/3 of the frame length, so that abrupt characteristic changes between adjacent frames are avoided;
  • Windowing each frame of the signal increases the continuity between the left and right ends of the frame.
  • The window function is a Hamming window, given by w(n) = 0.54 − 0.46·cos(2πn/(M − 1)), 0 ≤ n ≤ M − 1, where M is the frame length in samples.
  • step 10: Perform a fast Fourier transform (FFT) on each frame of the signal processed in step 9, moving the signal from the time domain to the frequency domain to obtain its spectrum, then take the squared magnitude as the discrete power spectrum S(k);
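The front-end steps so far (pre-emphasis, framing, Hamming windowing, FFT, squared magnitude) can be sketched as follows; the pre-emphasis coefficient 0.97, 25 ms frames, 50% shift, and FFT size 512 are illustrative choices within the ranges given in the text, and the mel filterbank, log, and DCT that would complete the MFCC are omitted.

```python
import numpy as np

def power_spectrum_frames(y, fs, frame_ms=25, hop_ratio=0.5, nfft=512):
    """Per-frame discrete power spectrum S(k): pre-emphasis, framing,
    Hamming window, FFT, squared magnitude."""
    y = np.append(y[0], y[1:] - 0.97 * y[:-1])  # pre-emphasis (alpha assumed)
    flen = int(fs * frame_ms / 1000)            # frame length in samples
    hop = int(flen * hop_ratio)                 # frame shift in samples
    n_frames = 1 + (len(y) - flen) // hop
    idx = np.arange(flen) + hop * np.arange(n_frames)[:, None]
    frames = y[idx] * np.hamming(flen)          # window each frame
    return np.abs(np.fft.rfft(frames, nfft)) ** 2

S = power_spectrum_frames(np.sin(2 * np.pi * 440 * np.arange(8000) / 8000.0), 8000)
```

For a 440 Hz tone at fs = 8000 Hz and nfft = 512, each frame's spectrum peaks near bin 440·512/8000 ≈ 28.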
  • In equation (XX), d[T(i), R(w(j))] is the distance between the feature vector T(i) under test and the reference template vector R(w(j)); T(i) denotes the speech feature vector of the i-th frame of T; R(w(j)) denotes the speech feature vector of the w(j)-th frame of R; D denotes the minimum distance between the vector sequence under test and the reference sample vectors;
  • The reference sample template with the smallest matching distance is the recognition result for the independent component.
  • Each independent component is matched against the same set of reference templates.
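The matching rule above (accumulate frame distances along a warped alignment w(j) and pick the template with the smallest total distance) can be sketched with a standard dynamic-time-warping recursion; this is an illustrative implementation, not necessarily the patent's exact distance.

```python
import numpy as np

def dtw_distance(T, R):
    """DTW distance between feature sequences T (n_t x d) and R (n_r x d):
    accumulate frame distances d[T(i), R(w(j))] along the best alignment."""
    nt, nr = len(T), len(R)
    D = np.full((nt + 1, nr + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, nt + 1):
        for j in range(1, nr + 1):
            cost = np.linalg.norm(T[i - 1] - R[j - 1])   # frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[nt, nr]

def best_template(T, templates):
    """The reference template with the smallest matching distance wins."""
    return int(np.argmin([dtw_distance(T, R) for R in templates]))
```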
  • If pattern matching identifies only one independent component, the four signals collected by the microphone array contain a single sound source; if several independent components are identified, they contain multiple sound sources. The independent sound source to be localized can be selected as required.
  • An implementation system for realizing the above sound source localization method comprises four microphones with voltage amplification and level-shifting circuit modules, a storage module, an algorithm processing and system control module, and a display module; the four microphones and the voltage amplification and level-shifting circuit modules are connected to the storage module, and the storage module, the algorithm processing and system control module, and the display module are connected in sequence;
  • The four microphones and the voltage amplification and level-shifting circuit modules acquire sound signals in real time; the storage module stores the acquired sound signals and time signals; the algorithm processing and system control module separates the collected mixed sound signals with the progressive serial orthogonalization blind source separation algorithm, calculates the time delays with the selected TDOA sound localization algorithm, and sets up the equations to solve for the sound source position; the display module displays the sound source position.
  • The algorithm processing and system control module is an STM32 development platform; the display module is a liquid crystal display.
  • The invention uses the TDOA algorithm to calculate the time delay and obtain the sound source position.
  • When the separated signal contains multiple sound sources, the separated target signal is correlated directly with the mixed signals to calculate the delays; the computational load is small and the calculation is fast. When the collected signal is a single sound source, the improved TDOA algorithm is used for delay calculation, which improves the accuracy to a certain extent and reduces the computational load.
  • The invention adopts passive localization, which is easy to conceal and has low power consumption.
  • The invention combines blind source separation with sound source localization, making up for the inability of previous sound source localization to recognize multiple sound sources.
  • FIG. 1 is a structural block diagram of an implementation system of an improved sound source localization method based on a progressive serial orthogonalization blind source separation algorithm according to the present invention.
  • FIG. 2 is a schematic flow chart of an improved sound source localization method based on a progressive serial orthogonalization blind source separation algorithm according to the present invention.
  • FIG. 3 is a schematic flow chart of an improved TDOA algorithm of the present invention.
  • Embodiment 1: An improved sound source localization method based on the progressive serial orthogonalization blind source separation algorithm, comprising steps (1)-(5) as described above; the flow is shown in FIG. 2. The microphone array is obtained by placing microphones at the four positions (0, 0, 0), (a, 0, 0), (0, a, 0), (0, 0, a) in a three-dimensional Cartesian coordinate system, where a is a fixed parameter denoting the distance from each of the three positions (a, 0, 0), (0, a, 0), (0, 0, a) to the microphone at the origin (0, 0, 0). For sound localization in complex environments, sound source separation is used to extract the target sound source from the ambient mixed sound signal, thereby improving localization accuracy in such environments.
  • Embodiment 2: The method according to Embodiment 1, characterized in that the precise delay is obtained according to step (5) by the improved TDOA flow shown in FIG. 3, with the steps described above.
  • Embodiment 3: The method according to Embodiment 1, characterized in that in step (2) the sound signals collected in step (1) are separated by the progressive serial orthogonalization blind source separation algorithm into the individual independent sound source signals, with the steps described above.
  • Embodiment 4: The method according to Embodiment 1, characterized in that in step (3) the Mel-frequency cepstral coefficients (MFCC) are extracted as the sound feature parameters, the sound signals are identified by the pattern matching algorithm, and the independent sound source signal of the sound to be localized is selected, with the steps described above.
  • An implementation system of the sound source localization method according to any one of the preceding embodiments, shown in FIG. 1: it comprises four microphones with voltage amplification and level-shifting circuit modules, a storage module, an algorithm processing and system control module, and a display module, connected as described above; the algorithm processing and system control module is an STM32 development platform, and the display module is a liquid crystal display.


Abstract

An improved sound source localization method based on a progressive serial orthogonalization blind source separation algorithm, and an implementation system therefor. The method comprises the following steps: (1) collect and store sound signals; (2) separate the sound signals to obtain independent sound source signals; (3) for the independent sound source signals, select, by a pattern matching algorithm, the independent sound source signal of the sound to be localized; (4) according to the pattern matching result, if there is a single sound source, first perform coarse localization: obtain the envelope of the signal, sample at low resolution, roughly calculate the delay by the generalized cross-correlation function method, and shift the signal in the time domain by the coarsely estimated number of points; then perform fine localization: sample at high resolution, calculate the delay by the generalized cross-correlation function method, obtain the precise delay, and solve for the sound source position; if there are multiple sound sources, calculate the delays by the TDOA algorithm and solve for the sound source position. Compared with the traditional TDOA method, the accuracy is improved to a certain extent and the computational load of the algorithm is reduced.

Description

Improved sound source localization method based on a progressive serial orthogonalization blind source separation algorithm, and implementation system therefor

Technical field

The present invention relates to an improved sound source localization method based on a progressive serial orthogonalization blind source separation algorithm and an implementation system therefor, and belongs to the technical field of sound source localization.

Background art

Sound is an important carrier of information in nature. By acquiring sound signals, people can obtain not only the voice information carried by the sound but also, through sound source localization techniques, position information beyond the content carried by the sound, based on the characteristics of sound propagation and the propagation path itself. Because of these two properties, the acquisition of sound signals plays an irreplaceable role in fields such as security monitoring, localization and search, and area detection.

Earlier methods for localizing an unknown target relied mainly on radio, laser, or ultrasound: a detection signal is actively transmitted, the wave reflected by the measured object is received, and the object's position information is analyzed and calculated from it. Because the detection is active and both transmission and reception use waves of a pre-assigned frequency, these methods are not susceptible to interference from the natural environment and offer high precision and strong anti-interference performance. However, active localization requires a strong transmit power, so it cannot be applied in low-power or energy-constrained environments. Sound source localization, by contrast, is passive: it is easy to conceal, uses ubiquitous sound waves, and has low equipment cost and low power consumption, so it has attracted wide attention and application.

Blind source separation is a signal processing method developed in the 1990s. Without knowing the source signals or the parameters of the transmission channel, it recovers the components of the source signals from the observed signals alone, based on the statistical characteristics of the sources. The "source" here refers to the original signals, i.e., the independent components; "blind" means both that the source signals cannot be observed and that the way they are mixed is unknown. Blind source separation can therefore be used to process mixed sound signals when neither the source signals nor the transmission channel parameters are known. The progressive serial orthogonalization blind source separation algorithm is one kind of blind source separation algorithm; it finds the independent components by a fixed-point iteration with progressive orthogonalization.

Sound source localization based on arrival delay: a sound wave propagating in air at constant speed arrives at a pair of receivers located at different positions with different phases. From the phase difference of the received sound signals, a delay algorithm yields the time difference of arrival at each receiver, from which the sound source position is found. This localization approach has several advantages: first, the equipment requirements are modest; second, the steps are simple and the computational load is small; third, it is easy to combine with other systems that need localization data.

Chinese patent document CN104181506A discloses a sound source localization method based on improved PHAT-weighted time delay estimation: a microphone array acquires 4 channels of sound signals, which are converted into digital signals by A/D sampling circuits; time delay estimates are obtained by an improved PHAT-weighted generalized cross-correlation function method; combined with the spatial positions of the placed microphone array, the nonlinear equations are solved by an iterative method to obtain the relative position of the sound source. However, the system described in that patent can neither recognize multiple sound sources nor distinguish directional noise.

Chinese patent document CN104614069A discloses a power equipment fault sound detection method based on a joint approximate diagonalization blind source separation algorithm, with the specific steps: (1) use a microphone array; (2) apply the joint approximate diagonalization blind source separation algorithm to the sound signals collected by the microphone array in step (1) to separate the independent sound source signals; (3) extract the Mel-frequency cepstral coefficients (MFCC) of each independent sound source signal as sound feature parameters and identify the sound signals with a pattern matching algorithm: after the sound template under test is matched against all reference sample templates, the reference sample template with the smallest matching distance is the recognition result for the working sound of the power equipment. However, the performance of the joint approximate diagonalization algorithm used in that patent is strongly affected by the number of covariance matrices: the more matrices, the more complex the computation.
发明内容
为了克服现有声源定位方法中不能辨识多个声源的不足,本发明提出了一种基于渐进串行正交化盲源分离算法的改进声源定位方法;
本发明还提出了上述改进声源定位方法的实现系统。
本发明的技术方案为:
一种基于渐进串行正交化盲源分离算法的改进声源定位方法,包括步骤如下:
(1)通过麦克风阵列采集声音信号并存储;
(2)采用基于渐进串行正交化盲源分离算法对步骤(1)采集到的声音信号分离,得到各个独立声源信号;
(3)对步骤(2)得到的每个独立声源信号,提取梅尔频率倒谱系数(MFCC)作为声音特征参数,通过模式匹配算法识别声音信号,选取需要定位的声音的独立声源信号;
(4)根据步骤(3)中模式匹配的结果,如果为单一声源,则进入步骤(5);如果为多个声源,则通过TDOA算法计算时延,求解声源位置;
(5)先粗定位:求取信号的包络,低分辨率采样,通过广义自相关函数法粗略计算时延,根据粗略定位的点数对信号进行时域搬移;再细定位:高分辨率采样,通过广义自相关函数法计算时延,得到精确时延,求解声源位置。
传统的TDOA算法中,时延估计的精度受采样频率限制:所需精度越高,采样频率就越高;对于相同的采样时长,高采样频率带来极多的采样点数,算法的运算量也就越大。粗定位细定位算法先采用低分辨率估计粗时延并对信号进行时域搬移,再采用高分辨率进行高精度时延校准。较于低分辨率采样的传统算法,此算法可以达到高分辨率采样的计算精度;较于高分辨率采样的传统算法,此算法因已进行过一次时域搬移,在高精度校准时只需较短的有效时长即可计算出时延,减少了算法运算量。基于上述原理,该算法还可以突破采样麦克风之间的间距限制:当时延超出细定位有效时长时,只需进行一次粗定位时域搬移,即可计算精确时延。
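上述粗定位—细定位的思路可用如下示意性Python代码说明。这只是一个原理性草图:其中抽点倍数decim、细定位窗长fine_len和包络平滑窗长均为演示而假设的参数,并非本发明限定的取值。

```python
import numpy as np

def gcc_phat(a, b):
    """GCC-PHAT广义互相关:返回a相对b的时延点数(a滞后为正)。"""
    n = len(a)
    G = np.fft.fft(a) * np.conj(np.fft.fft(b))
    G /= np.abs(G) + 1e-12                 # PHAT加权:仅保留相位信息
    r = np.fft.ifft(G).real
    k = int(np.argmax(r))
    return k if k <= n // 2 else k - n     # 后半段峰值折回为负时延

def coarse_fine_delay(x1, x2, decim=8, fine_len=4096):
    """粗定位:对平滑包络低分辨率抽点采样求粗时延;
    细定位:按粗时延对x1做时域搬移后,在短窗内高分辨率精确校准。"""
    win = np.ones(2 * decim) / (2 * decim)
    env1 = np.convolve(np.abs(x1), win, mode='same')[::decim]   # 包络 + 抽点
    env2 = np.convolve(np.abs(x2), win, mode='same')[::decim]
    coarse = gcc_phat(env1, env2) * decim   # 粗时延(折算回原采样率点数)
    x1s = np.roll(x1, -coarse)              # 时域搬移,消去大部分时延
    fine = gcc_phat(x1s[:fine_len], x2[:fine_len])   # 短窗高精度校准
    return coarse + fine
```

细定位只在fine_len长度的短窗内计算,因而即使真实时延很大,也无需对全长高采样率信号做互相关,这正是上文所述减少运算量的原因。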
根据本发明优选的,根据所述步骤(5)得到精确时延,包括步骤如下:
A、设定通过步骤(3)获取4路声音信号,即x1(t)、x2(t)、x3(t)、x4(t),t为数字信号中采样点的序号,长度为N,将4路声音信号进行加窗滤波处理,消除噪声;
B、对4路信号进行包络提取,只取包络的上半部分为有效信号,以Fs/n的频率进行抽点采样,得x′1(t)、x′2(t)、x′3(t)、x′4(t),Fs为盲源分离时的采样频率,n为大于1的整数;
C、对x′1(t)、x′2(t)、x′3(t)、x′4(t)进行傅立叶变换到频域,即X′1(k)、X′2(k)、X′3(k)、X′4(k),其中k为与t对应的数字信号中采样点的序号,t、k均为整数;
D、将x′1(t)作为基准信号,分别计算X′1(k)与X′2(k)、X′1(k)与X′3(k)、X′1(k)与X′4(k)的互功率谱G′12(k)、G′13(k)、G′14(k),对互功率谱G′12(k)、G′13(k)、G′14(k)进行PHAT加权操作,如式(Ⅰ)、式(Ⅱ)、式(Ⅲ)所示:
G′12(k)=X′1*(k)X′2(k)/|X′1*(k)X′2(k)|  (Ⅰ)
G′13(k)=X′1*(k)X′3(k)/|X′1*(k)X′3(k)|  (Ⅱ)
G′14(k)=X′1*(k)X′4(k)/|X′1*(k)X′4(k)|  (Ⅲ)
式(Ⅰ)、式(Ⅱ)、式(Ⅲ)中,X′1*(k)为X′1(k)的共轭;
E、将互功率谱G′12(k)、G′13(k)、G′14(k)逆变换到时域,得到对应的广义互相关函数R′12(t)、R′13(t)、R′14(t);当R′12(t)、R′13(t)、R′14(t)分别取最大值时t所对应的时延即为3路声音信号x′2(t)、x′3(t)、x′4(t)与基准信号x′1(t)的时延估计t′12、t′13、t′14;
设R′1s(t)取最大值时t的值为n′1s,s=2、3、4,所取声音信号的点数为N′=fix(N/n),采样频率为Fs/n,若n′1s>N′/2,则n′1s更新为n′1s-N′-1;若n′1s≤N′/2,则n′1s不变;由此计算得到n′12、n′13、n′14
F、若n′1s≥0,将x1(t)在时域上向左平移n′1s*n个点;若n′1s<0,xs(t)在时域上向右平移n′1s*n个点;
取x1(t)、xs(t)的前N1个点信号为z1(t)、zs(t),N1为大于2n且小于N的整数,即细定位所用的信号长度,此时采样频率仍为Fs;
按照步骤C-E采用广义互相关求取精确时延点数n″12,即将信号z1(t)、z2(t)傅立叶变换到频域,PHAT加权计算互功率谱,然后傅立叶反变换到时域求得互相关函数,取互相关最大时点数所对应的时延为两路的时延估计n″12;n″13和n″14与n″12计算方法一致;
G、则x1(t)、x2(t)的时延t12=(n′12·n+n″12)/Fs;同理,x1(t)、x3(t)的时延t13=(n′13·n+n″13)/Fs,x1(t)、x4(t)的时延t14=(n′14·n+n″14)/Fs。
根据本发明优选的,所述步骤(4),如果为多个声源,则通过TDOA算法计算时延,包括步骤如下:
a、步骤(2)获取需要进行定位的独立分量为yi(t),i为整数且1≤i≤4,t为数字信号中采样点的序号,将yi(t)、x1(t)、x2(t)、x3(t)、x4(t)这5路信号进行加窗滤波处理,再经傅立叶变换到频域,得到频域信号Yi(k)、X1(k)、X2(k)、X3(k)、X4(k),k为与t对应的数字信号采样点的序号;
b、将独立分量yi(t)作为基准信号,分别计算Yi(k)与X1(k)、Yi(k)与X2(k)、Yi(k)与X3(k)、Yi(k)与X4(k)的互功率谱,即Gi1(k)、Gi2(k)、Gi3(k)、Gi4(k),对互功率谱Gi1(k)、Gi2(k)、Gi3(k)、Gi4(k)进行PHAT加权操作,如式(Ⅳ)、(Ⅴ)、(Ⅵ)、(Ⅶ)所示:
Gi1(k)=Yi*(k)X1(k)/|Yi*(k)X1(k)|  (Ⅳ)
Gi2(k)=Yi*(k)X2(k)/|Yi*(k)X2(k)|  (Ⅴ)
Gi3(k)=Yi*(k)X3(k)/|Yi*(k)X3(k)|  (Ⅵ)
Gi4(k)=Yi*(k)X4(k)/|Yi*(k)X4(k)|  (Ⅶ)
式(Ⅳ)、(Ⅴ)、(Ⅵ)、(Ⅶ)中,Yi*(k)为Yi(k)的共轭,1/|Yi*(k)Xs(k)|(s=1,2,3,4)为PHAT加权函数;
c、将互功率谱Gi1(k)、Gi2(k)、Gi3(k)、Gi4(k)逆变换到时域,得到对应的广义互相关函数Ri1(n)、Ri2(n)、Ri3(n)、Ri4(n),当Ri1(n)、Ri2(n)、Ri3(n)、Ri4(n)分别取最大值时,n所对应的时延即为4路声音信号x1(t)、x2(t)、x3(t)、x4(t)与基准信号yi(t)的时延估计ti1、ti2、ti3、ti4,设Ri1(n)取最大值时的n的值为ni1,所取声音信号的点数为N,采样频率为Fs,若ni1>N/2,则
ti1=(ni1-N-1)/Fs
若ni1≤N/2,则
ti1=ni1/Fs
ti2、ti3、ti4的计算与ti1的计算方法一致;
设Ri2(n)取最大值时的n的值为ni2,所取声音信号的点数为N,采样频率为Fs,若ni2>N/2,则
ti2=(ni2-N-1)/Fs
若ni2≤N/2,则
ti2=ni2/Fs
设Ri3(n)取最大值时的n的值为ni3,所取声音信号的点数为N,采样频率为Fs,若ni3>N/2, 则
ti3=(ni3-N-1)/Fs
若ni3≤N/2,则
ti3=ni3/Fs
设Ri4(n)取最大值时的n的值为ni4,所取声音信号的点数为N,采样频率为Fs,若ni4>N/2,则
ti4=(ni4-N-1)/Fs
若ni4≤N/2,则
ti4=ni4/Fs
d、将ti1作为基准延时,则t12=ti1-ti2表示x1(t)相对于x2(t)的延时,t13=ti1-ti3表示x1(t)相对于x3(t)的延时,t14=ti1-ti4表示x1(t)相对于x4(t)的延时,得到x1(t)相对于x2(t)、x3(t)、x4(t)的延时t12、t13、t14
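以分离出的独立分量为基准信号求各路麦克风信号相对时延的步骤a~d,可用如下示意性Python代码概括。GCC-PHAT为常规写法;函数名与信号长度等均为演示用假设。

```python
import numpy as np

def gcc_phat_delay(ref, sig):
    """GCC-PHAT:返回sig相对ref的时延点数(sig滞后为正)。"""
    n = len(ref)
    G = np.fft.fft(sig) * np.conj(np.fft.fft(ref))
    G /= np.abs(G) + 1e-12                 # PHAT加权
    r = np.fft.ifft(G).real
    k = int(np.argmax(r))
    return k if k <= n // 2 else k - n     # 峰值折回负时延

def relative_delays(y, mics):
    """以分离出的独立分量y为基准,先求各麦克风信号相对y的时延ti1..ti4,
    再按t1s = ti1 - tis得到x1相对x2、x3、x4的延时t12、t13、t14。"""
    t = [gcc_phat_delay(y, x) for x in mics]
    return [t[0] - ts for ts in t[1:]]
```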
根据本发明优选的,所述步骤(4)、(5)中,求解声源位置,包括:设定声源位置坐标为(x,y,z),得到延时参数以后,通过式(Ⅷ)求取声源位置坐标:
√(x²+y²+z²)-√((x-a)²+y²+z²)=v·t12
√(x²+y²+z²)-√(x²+(y-a)²+z²)=v·t13
√(x²+y²+z²)-√(x²+y²+(z-a)²)=v·t14  (Ⅷ)
求得声源的位置坐标(x,y,z),式中,t12、t13、t14为三路之间的延时值,v为声音在空气中的速度。
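该三元非线性方程组可用牛顿类迭代法数值求解。以下为一个示意性Python草图:初值x0、迭代次数等均为演示而假设的参数,实际收敛性取决于阵列几何、时延精度与初值选取。

```python
import numpy as np

def locate_source(t12, t13, t14, a=1.0, v=340.0, x0=(1.0, 1.0, 1.0), iters=100):
    """对方程组 ||P-m1||-||P-ms|| = v*t1s (s=2,3,4) 做高斯-牛顿迭代。
    t1s为x1相对xs的时延(秒),a为阵元间距参数,v为声速,返回坐标(x,y,z)。"""
    mics = np.array([[0, 0, 0], [a, 0, 0], [0, a, 0], [0, 0, a]], float)
    dt = np.array([t12, t13, t14])
    p = np.array(x0, float)
    for _ in range(iters):
        d = np.linalg.norm(p - mics, axis=1)       # 声源到各麦克风的距离
        f = (d[0] - d[1:]) - v * dt                # 三个方程的残差
        u = (p - mics) / d[:, None]                # 指向声源的单位向量
        J = u[0] - u[1:]                           # 残差关于(x,y,z)的雅可比
        step, *_ = np.linalg.lstsq(J, -f, rcond=None)
        p += step
        if np.linalg.norm(step) < 1e-10:           # 步长足够小即认为收敛
            break
    return p
```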
根据本发明优选的,所述麦克风阵列为:在三维直角坐标系下选择(0,0,0),(a,0,0),(0,a,0),(0,0,a)四个位置摆放麦克风,得到所述麦克风阵列,a为固定参数,表示三个坐标(a,0,0),(0,a,0),(0,0,a)到坐标系原点(0,0,0)位置麦克风的距离。
根据本发明优选的,所述步骤(1),通过麦克风阵列采集的声音信号即混合声音信号x(t),x(t)=[x1(t),x2(t),x3(t),x4(t)],x1(t)、x2(t)、x3(t)、x4(t)分别如式(Ⅸ)、(Ⅹ)、(Ⅺ)、(Ⅻ)所示:
x1(t)=a11s1+a12s2+a13s3+a14s4  (Ⅸ)
x2(t)=a21s1+a22s2+a23s3+a24s4  (Ⅹ)
x3(t)=a31s1+a32s2+a33s3+a34s4  (Ⅺ)
x4(t)=a41s1+a42s2+a43s3+a44s4  (Ⅻ)
式(Ⅸ)~(Ⅻ)中,s1,s2,s3,s4为4个独立声源发出的声音信号,aij(i=1,2,3,4;j=1,2,3,4)是实系数。
根据本发明所优选的,步骤(2)中,采用基于渐进串行正交化盲源分离算法对步骤(1)采集到的声音信号分离,得到各个独立声源信号;包括步骤如下:
①采用麦克风阵列采集到环境声音,取出同一时间段内的4路声音信号进行中心化处理,即去均值处理,去均值后得到信号
x̄(t),通过式(XIII)求得:
x̄(t)=x(t)-E{x(t)}  (XIII)
②对去均值后的声音信号
x̄(t)进行白化处理,即对x̄(t)进行线性变换V,得到白化信号z(t):
z(t)=V·x̄(t)  (XIV)
白化处理采用主分量分析方法,对信号进行去相关和缩放,线性白化变换V如式(XV)所示:
V=D^(-1/2)·E^T  (XV)
式(XV)中,矩阵E以协方差矩阵C=E{x̄(t)x̄(t)^T}的单位范数特征向量为列,D=diag(d1,d2,d3,d4)是以C的特征值为对角元素的对角矩阵;
③计算观测信号z(t)的独立成分个数,记为m,且m≤4;因为步骤(1)中麦克风阵列由4个麦克风组成,采集4路声音信号,根据盲源分离原理,独立分量数目不大于观测信号数目。
选择具有单位范数的初始化向量wp,p=1,2,…,m,令p=1;
④对wp进行如式(XVI)所示的迭代运算:
wp←E{z(t)g(wp^T z(t))}-E{g′(wp^T z(t))}wp  (XVI)
式(XVI)中,函数g为g1(y)、g2(y)或g3(y);g1(y)=tanh(a1y),g2(y)=y*exp(-y^2/2),g3(y)=y^3;
⑤对步骤④中迭代后的wp进行正交化和标准化,正交化方法如式(XVII)所示:
wp←wp-∑_(j=1)^(p-1)(wp^T wj)wj  (XVII)
对wp标准化,即除以其范数,如式(XVIII)所示:
wp=wp/norm(wp)  (XVIII)
⑥对步骤⑤中标准化后的wp进行检测,看其是否收敛,如果尚未收敛,则返回步骤④;
⑦p更新为p+1,如果p≤m,返回步骤④,否则,进入步骤⑧;
⑧通过步骤③~⑦的循环计算,得到解混矩阵W={w1,w2,…,wm}T,m≤4;由式(XIX)得到源信号y(t):
y(t)=Wx(t)  (XIX)
式(XIX)中,y(t)=[y1(t),y2(t),…yi(t)…,ym(t)],i=1,2,…,m,分别为麦克风阵列采集声音信号经过盲源分离后得到的m个独立分量,即独立声源信号。
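步骤①~⑧的渐进串行正交化(逐分量提取、对已提取分量正交化)过程,可用如下示意性Python代码概括。此处取非线性函数g1(y)=tanh(a1·y)且a1=1;随机初始化种子、迭代上限等均为演示用假设,且假定协方差矩阵满秩。

```python
import numpy as np

def fastica_deflation(x, m=None, max_iter=200, tol=1e-8, seed=0):
    """渐进串行正交化盲源分离的最小实现。
    x: (通道数, 采样点数)的混合信号;返回分离分量y(t)与解混矩阵W·V。"""
    rng = np.random.default_rng(seed)
    n, T = x.shape
    m = n if m is None else m
    xc = x - x.mean(axis=1, keepdims=True)          # ① 去均值
    d, E = np.linalg.eigh(xc @ xc.T / T)            # 协方差矩阵特征分解
    V = np.diag(d ** -0.5) @ E.T                    # ② 白化变换 V=D^(-1/2)E^T
    z = V @ xc
    W = np.zeros((m, n))
    for p in range(m):                              # ③ 逐分量提取,p=1..m
        w = rng.standard_normal(n)
        w /= np.linalg.norm(w)
        for _ in range(max_iter):
            wz = w @ z
            g, dg = np.tanh(wz), 1 - np.tanh(wz) ** 2
            w_new = (z * g).mean(axis=1) - dg.mean() * w   # ④ 不动点迭代
            w_new -= W[:p].T @ (W[:p] @ w_new)             # ⑤ 对已提取分量正交化
            w_new /= np.linalg.norm(w_new)                 # ⑤ 标准化
            converged = abs(abs(w_new @ w) - 1) < tol      # ⑥ 收敛判定
            w = w_new
            if converged:
                break
        W[p] = w                                    # ⑦ 进入下一分量
    return W @ z, W @ V                             # ⑧ y(t)与解混矩阵
```

下面的验证中,把一路方波与一路均匀噪声线性混合后再分离,分离分量与原始源的相关系数应接近1(次序与符号可能互换,这是盲源分离固有的不确定性)。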
根据本发明优选的,所述步骤(3),对得到的每个独立声源信号,提取梅尔频率倒谱系数(MFCC) 作为声音特征参数,通过模式匹配算法识别声音信号,选取需要定位的声音的独立声源信号;包括步骤如下:
⑨对步骤⑧中分离出的源信号y(t)进行如下处理:
对源信号y(t)做预加重处理,即将源信号y(t)通过一个高通滤波器,该高通滤波器的传递函数为H(z)=1-μz^(-1),0.9≤μ≤1.0;
对预加重处理后的源信号y(t)做分帧处理,帧长为10ms-30ms,帧移为帧长的1/2-1/3;可以避免帧与帧之间的特性变化过大;
对每帧信号做加窗处理,可以增加帧左端和右端的连续性,窗函数为汉明窗,公式为
W(n)=0.54-0.46cos(2πn/(M-1)),0≤n≤M-1,M为帧长采样点数
⑩对步骤⑨处理后的每帧信号进行快速傅立叶(FFT)变换,将信号从时域转到频域,得到信号的频谱,再取模的平方作为离散功率谱S(k);
S(k)=|X(k)|²,X(k)为该帧信号的FFT频谱
将每帧的频谱参数通过梅尔刻度滤波器,梅尔刻度滤波器包括V个三角形带通滤波器,20≤V≤30,得到V个参数Pv,v=0,1,…,V-1;将每个频带的输出取对数,得到Lv,v=0,1,…,V-1;将得到的V个参数进行离散余弦变换,得到Dv,v=0,1,…,V-1;去掉D0,取D1,D2,…,Dk作为MFCC的参数;
Dv=∑_(l=0)^(V-1) Ll·cos(πv(l+0.5)/V),v=0,1,…,V-1
通过动态时间规整DTW算法进行声音识别,包括:
设上述步骤得到的待测试声音信号分为p帧矢量,即{T(1),T(2),…,T(n),…,T(p)},T(n)为第n帧的语音特征矢量,1≤n≤p;参考样本中有q帧矢量,即{R(1),R(2),…,R(m),…,R(q)},R(m)为第m帧的语音特征矢量,1≤m≤q;则动态时间规整DTW算法利用时间规整函数j=w(i)完成待测试矢量与模板矢量时间轴的映射,且规整函数w满足式(XX):
D=min_w ∑_(i=1)^(p) d[T(i),R(w(i))]  (XX)
在式(XX)中,d[T(i),R(w(i))]是待测试矢量T(i)与参考模板矢量R(w(i))之间的距离;T(i)表示T中第i帧的语音特征矢量,R(w(i))表示R中第w(i)帧的语音特征矢量;D表示待测试矢量与参考样本矢量之间的最小规整距离;
利用DTW将待测试声音模板与所有参考样本模板进行匹配后,匹配距离最小的参考样本模板就是独立分量识别的结果;当4路待测试声音匹配距离最小时所用的参考模板为同一个参考模板时,麦克风阵列采集的4路信号为单一声源,否则为多个声源;即可根据要求选取需要定位的独立声源信号。
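模式匹配中的动态时间规整与模板选取可用如下示意性Python代码说明。此处假定MFCC特征已按前述步骤提取为(帧数,特征维数)矩阵,帧间距离取欧氏距离;函数名均为演示用假设。

```python
import numpy as np

def dtw_distance(test, ref):
    """动态时间规整:返回待测特征序列与参考模板间的最小规整距离D。
    test、ref为(帧数, 特征维数)的特征矩阵。"""
    p, q = len(test), len(ref)
    D = np.full((p + 1, q + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, p + 1):
        for j in range(1, q + 1):
            cost = np.linalg.norm(test[i - 1] - ref[j - 1])   # 帧间欧氏距离
            # 规整路径只允许单调前进:对角、横向或纵向一步
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[p, q]

def match(test, templates):
    """与所有参考样本模板匹配,返回规整距离最小的模板序号。"""
    return int(np.argmin([dtw_distance(test, r) for r in templates]))
```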
一种实现上述声源定位方法的实现系统,包括4个麦克风与电压放大抬高电路模块、存储模块、 算法处理和系统控制模块以及显示模块,所述4个麦克风与电压放大抬高电路模块均连接所述存储模块,所述存储模块、所述算法处理和系统控制模块、所述显示模块依次连接;
所述4个麦克风与电压放大抬高电路模块实时获取声音信号;所述存储模块用于存储获取的声音信号和时间信号;所述算法处理和系统控制模块通过基于渐进串行正交化盲源分离算法分离采集到的混合声音信号,通过选择TDOA声音定位算法计算时延,并列出方程组求解出声源位置;所述显示模块用于显示声源位置。
根据本发明优选的,所述算法处理和系统控制模块为STM32开发平台;所述显示模块为液晶显示屏。
本发明的有益效果为:
1、本发明采用TDOA算法计算延时求得声源位置,当分离信号为多声源时,将分离的目标信号直接与混合信号相关计算时延,运算量小,计算速度快;当采集信号为单声源时,采用改进的TDOA算法进行时延计算,可以在一定程度上提高精度,并减少算法运算量。
2、本发明采用无源定位方法,被动式原理,功耗小。
3、本发明将盲源分离与声源定位结合起来,弥补以往声源定位不能辨识多个声源的不足。
附图说明
图1为本发明基于渐进串行正交化盲源分离算法的改进声源定位方法的实现系统的结构框图。
图2为本发明基于渐进串行正交化盲源分离算法的改进声源定位方法中的流程示意图。
图3为本发明改进TDOA算法的流程示意图。
具体实施方式
下面结合说明书附图和实施例对本发明作进一步限定,但不限于此。
实施例1
一种基于渐进串行正交化盲源分离算法的改进声源定位方法,如图2所示,包括步骤如下:
(1)通过麦克风阵列采集声音信号并存储;麦克风阵列为:在三维直角坐标系下选择(0,0,0),(a,0,0),(0,a,0),(0,0,a)四个位置摆放麦克风,得到所述麦克风阵列,a为固定参数,表示三个坐标(a,0,0),(0,a,0),(0,0,a)到坐标系原点(0,0,0)位置麦克风的距离。通过麦克风阵列采集的声音信号即混合声音信号x(t),x(t)=[x1(t),x2(t),x3(t),x4(t)],x1(t)、x2(t)、x3(t)、x4(t)分别如式(Ⅸ)、(Ⅹ)、(Ⅺ)、(Ⅻ)所示:
x1(t)=a11s1+a12s2+a13s3+a14s4  (Ⅸ)
x2(t)=a21s1+a22s2+a23s3+a24s4  (Ⅹ)
x3(t)=a31s1+a32s2+a33s3+a34s4  (Ⅺ)
x4(t)=a41s1+a42s2+a43s3+a44s4  (Ⅻ)
式(Ⅸ)~(Ⅻ)中,s1,s2,s3,s4为4个独立声源发出的声音信号,aij(i=1,2,3,4;j=1,2,3,4)是实系数。
(2)采用基于渐进串行正交化盲源分离算法对步骤(1)采集到的声音信号分离,得到各个独立声源信号;对复杂环境下的声音定位,使用声源分离技术,可以从环境混合声音信号中将目标声源提取出来,从而可以提高复杂环境下声音定位的准确度。
(3)对步骤(2)得到的每个独立声源信号,提取梅尔频率倒谱系数(MFCC)作为声音特征参数,通过模式匹配算法识别声音信号,选取需要定位的声音的独立声源信号;
(4)根据步骤(3)中模式匹配的结果,如果为单一声源,则进入步骤(5);如果为多个声源,则通过TDOA算法计算时延,求解声源位置;
(5)先粗定位:求取信号的包络,低分辨率采样,通过广义自相关函数法粗略计算时延,根据粗略定位的点数对信号进行时域搬移;再细定位:高分辨率采样,通过广义自相关函数法计算时延,得到精确时延,求解声源位置。
传统的TDOA算法中,时延估计的精度受采样频率限制:所需精度越高,采样频率就越高;对于相同的采样时长,高采样频率带来极多的采样点数,算法的运算量也就越大。粗定位细定位算法先采用低分辨率估计粗时延并对信号进行时域搬移,再采用高分辨率进行高精度时延校准。较于低分辨率采样的传统算法,此算法可以达到高分辨率采样的计算精度;较于高分辨率采样的传统算法,此算法因已进行过一次时域搬移,在高精度校准时只需较短的有效时长即可计算出时延,减少了算法运算量。基于上述原理,该算法还可以突破采样麦克风之间的间距限制:当时延超出细定位有效时长时,只需进行一次粗定位时域搬移,即可计算精确时延。
实施例2
根据实施例1所述的一种基于渐进串行正交化盲源分离算法的改进声源定位方法,其区别在于,根据步骤(5)得到精确时延,如图3所示,求解声源位置,包括步骤如下:
A、设定通过步骤(3)获取4路声音信号,即x1(t)、x2(t)、x3(t)、x4(t),t为数字信号中采样点的序号,长度为N,将4路声音信号进行加窗滤波处理,消除噪声;
B、对4路信号进行包络提取,只取包络的上半部分为有效信号,以Fs/n的频率进行抽点采样,得x′1(t)、x′2(t)、x′3(t)、x′4(t),Fs为盲源分离时的采样频率,n为大于1的整数;
C、对x′1(t)、x′2(t)、x′3(t)、x′4(t)进行傅立叶变换到频域,即X′1(k)、X′2(k)、X′3(k)、X′4(k),其中k为与t对应的数字信号中采样点的序号,t、k均为整数;
D、将x′1(t)作为基准信号,分别计算X′1(k)与X′2(k)、X′1(k)与X′3(k)、X′1(k)与X′4(k)的互功率谱G′12(k)、G′13(k)、G′14(k),对互功率谱G′12(k)、G′13(k)、G′14(k)进行PHAT加权操作,如式(Ⅰ)、式(Ⅱ)、式(Ⅲ)所示:
Figure PCTCN2017104879-appb-000036
Figure PCTCN2017104879-appb-000037
Figure PCTCN2017104879-appb-000038
式(Ⅰ)、式(Ⅱ)、式(Ⅲ)中,
Figure PCTCN2017104879-appb-000039
为X′1(k)的共轭;
E、将互功率谱G′12(k)、G′13(k)、G′14(k)逆变换到时域,得到对应的广义互相关函数R′12(t)、R′13(t)、R′14(t);当R′12(t)、R′13(t)、R′14(t)分别取最大值时t所对应的时延即为3路声音信号x′2(t)、x′3(t)、x′4(t)与基准信号x′1(t)的时延估计t′12、t′13、t′14;
设R′1s(t)取最大值时t的值为n′1s,s=2、3、4,所取声音信号的点数为N′=fix(N/n),采样频率为Fs/n,若n′1s>N′/2,则n′1s更新为n′1s-N′-1;若n′1s≤N′/2,则n′1s不变;由此计算得到n′12、n′13、n′14
F、若n′1s≥0,将x1(t)在时域上向左平移n′1s*n个点;若n′1s<0,xs(t)在时域上向右平移n′1s*n个点;
取x1(t)、xs(t)的前N1个点信号为z1(t)、zs(t),N1为大于2n且小于N的整数,即细定位所用的信号长度,此时采样频率仍为Fs;
按照步骤C-E采用广义互相关求取精确时延点数n″12,即将信号z1(t)、z2(t)傅立叶变换到频域,PHAT加权计算互功率谱,然后傅立叶反变换到时域求得互相关函数,取互相关最大时点数所对应的时延为两路的时延估计n″12;n″13和n″14与n″12计算方法一致;
G、则x1(t)、x2(t)的时延
Figure PCTCN2017104879-appb-000040
同理
Figure PCTCN2017104879-appb-000041
H、设定独立声源坐标为(x,y,z),得到延时参数以后,通过式(Ⅷ)求取声源坐标:
Figure PCTCN2017104879-appb-000042
求得声源的位置坐标(x,y,z),式中,t12、t13、t14为三路之间的延时值,v为声音在空气中的速度。
实施例3
根据实施例1所述的一种基于渐进串行正交化盲源分离算法的改进声源定位方法,其区别在于,所述步骤(4),如果为多个声源,则通过TDOA算法计算时延,求解声源位置,包括步骤如下:
a、步骤(2)获取需要进行定位的独立分量为yi(t),i为整数且1≤i≤4,t为数字信号中采样点的序号,将yi(t)、x1(t)、x2(t)、x3(t)、x4(t)这5路信号进行加窗滤波处理,再经傅立叶变换到频域,得到频域信号Yi(k)、X1(k)、X2(k)、X3(k)、X4(k),k为与t对应的数字信号采样点的序号;
b、将独立分量yi(t)作为基准信号,分别计算Yi(k)与X1(k)、Yi(k)与X2(k)、Yi(k)与X3(k)、Yi(k)与X4(k)的互功率谱,即Gi1(k)、Gi2(k)、Gi3(k)、Gi4(k),对互功率谱Gi1(k)、Gi2(k)、Gi3(k)、Gi4(k)进行PHAT加权操作,如式(Ⅳ)、(Ⅴ)、(Ⅵ)、(Ⅶ)所示:
Figure PCTCN2017104879-appb-000043
Figure PCTCN2017104879-appb-000044
Figure PCTCN2017104879-appb-000045
Figure PCTCN2017104879-appb-000046
式(Ⅳ)、(Ⅴ)、(Ⅵ)、(Ⅶ)中,
Figure PCTCN2017104879-appb-000047
为Yi(k)的共轭,
Figure PCTCN2017104879-appb-000048
为PHAT函数;
c、将互功率谱Gi1(k)、Gi2(k)、Gi3(k)、Gi4(k)逆变换到时域,得到对应的广义互相关函数Ri1(n)、Ri2(n)、Ri3(n)、Ri4(n),当Ri1(n)、Ri2(n)、Ri3(n)、Ri4(n)分别取最大值时,n所对应的时延即为4路声音信号x1(t)、x2(t)、x3(t)、x4(t)与基准信号yi(t)的时延估计ti1、ti2、ti3、ti4,设Ri1(n)取最大值时的n的值为ni1,所取声音信号的点数为N,采样频率为Fs,若ni1>N/2,则
Figure PCTCN2017104879-appb-000049
若ni1≤N/2,则
Figure PCTCN2017104879-appb-000050
ti2、ti3、ti4的计算与ti1的计算方法一致;
设Ri2(n)取最大值时的n的值为ni2,所取声音信号的点数为N,采样频率为Fs,若ni2>N/2, 则
Figure PCTCN2017104879-appb-000051
若ni2≤N/2,则
Figure PCTCN2017104879-appb-000052
设Ri3(n)取最大值时的n的值为ni3,所取声音信号的点数为N,采样频率为Fs,若ni3>N/2,则
Figure PCTCN2017104879-appb-000053
若ni3≤N/2,则
Figure PCTCN2017104879-appb-000054
设Ri4(n)取最大值时的n的值为ni4,所取声音信号的点数为N,采样频率为Fs,若ni4>N/2,则
Figure PCTCN2017104879-appb-000055
若ni4≤N/2,则
Figure PCTCN2017104879-appb-000056
d、将ti1作为基准延时,则t12=ti1-ti2表示x1(t)相对于x2(t)的延时,t13=ti1-ti3表示x1(t)相对于x3(t)的延时,t14=ti1-ti4表示x1(t)相对于x4(t)的延时,得到x1(t)相对于x2(t)、x3(t)、x4(t)的延时t12、t13、t14
实施例4
根据实施例1所述的一种基于渐进串行正交化盲源分离算法的改进声源定位方法,其区别在于,步骤(2)中,采用基于渐进串行正交化盲源分离算法对步骤(1)采集到的声音信号分离,得到各个独立声源信号;包括步骤如下:
①采用麦克风阵列采集到环境声音,取出同一时间段内的4路声音信号进行中心化处理,即去均值处理,去均值后得到信号
Figure PCTCN2017104879-appb-000057
通过式(XIII)求得:
Figure PCTCN2017104879-appb-000058
②对去均值后的声音信号
Figure PCTCN2017104879-appb-000059
进行白化处理,即对
Figure PCTCN2017104879-appb-000060
进行线性变换V,得到白化信号z(t):
Figure PCTCN2017104879-appb-000061
白化处理采用主分量分析方法,对信号进行去相关和缩放,线性白化变换V如式(XV)所示:
Figure PCTCN2017104879-appb-000062
式(XV)中,矩阵E以协方差矩阵
Figure PCTCN2017104879-appb-000063
的单位范数特征向量为列,D=diag(d1,d2,d3,d4)是以C的特征值为对角元素的特征矩阵;
③计算观测信号z(t)的独立成分个数,记为m,且m≤4;因为步骤(1)中麦克风阵列由4个麦克风组成,采集4路声音信号,根据盲源分离原理,独立分量数目不大于观测信号数目。
选择具有单位范数的初始化向量wp,p=1,2,…,m,令p=1;
④对wp进行如式(XVI)所示的迭代运算:
Figure PCTCN2017104879-appb-000064
式(XVI)中,函数g为g1(y)、g2(y)或g3(y);g1(y)=tanh(a1y),g2(y)=y*exp(-y^2/2), g3(y)=y^3;
⑤对步骤④中迭代后的wp进行正交化和标准化,正交化方法如式(XVII)所示:
Figure PCTCN2017104879-appb-000065
对wp标准化,即除以其范数,如式(XVIII)所示:
wp=wp/norm(wp)  (XVIII)
⑥对步骤⑤中标准化后的wp进行检测,看其是否收敛,如果尚未收敛,则返回步骤④;
⑦p更新为p+1,如果p≤m,返回步骤④,否则,进入步骤⑧;
⑧通过步骤③~⑦的循环计算,得到解混矩阵W={w1,w2,…,wm}T,m≤4;由式(XIX)得到源信号y(t):
y(t)=Wx(t)  (XIX)
式(XIX)中,y(t)=[y1(t),y2(t),…yi(t)…,ym(t)],i=1,2,…,m,分别为麦克风阵列采集声音信号经过盲源分离后得到的m个独立分量,即独立声源信号。
实施例5
根据实施例1所述的一种基于渐进串行正交化盲源分离算法的改进声源定位方法,其区别在于,所述步骤(3),对得到的每个独立声源信号,提取梅尔频率倒谱系数(MFCC)作为声音特征参数,通过模式匹配算法识别声音信号,选取需要定位的声音的独立声源信号;包括步骤如下:
⑨对步骤⑧中分离出的源信号y(t)进行如下处理:
对源信号y(t)做预加重处理,即将源信号y(t)通过一个高通滤波器,该高通滤波器的传递函数为H(z)=1-μz^(-1),0.9≤μ≤1.0;
对预加重处理后的源信号y(t)做分帧处理,帧长为10ms-30ms,帧移为帧长的1/2-1/3;可以避免帧与帧之间的特性变化过大;
对每帧信号做加窗处理,可以增加帧左端和右端的连续性,窗函数为汉明窗,公式为
Figure PCTCN2017104879-appb-000066
⑩对步骤⑨处理后的每帧信号进行快速傅立叶(FFT)变换,将信号从时域转到频域,得到信号的频谱,再取模的平方作为离散功率谱S(k);
Figure PCTCN2017104879-appb-000067
将每帧的频谱参数通过梅尔刻度滤波器,梅尔刻度滤波器包括V个三角形带通滤波器,20≤V≤30,得到V个参数Pv,v=0,1,…,V-1;将每个频带的输出取对数,得到Lv,v=0,1,…,V-1;将得到的V个参数进行离散余弦变换,得到Dv,v=0,1,…,V-1;去掉D0,取D1,D2,…,Dk作为MFCC的参数;
Figure PCTCN2017104879-appb-000068
通过动态时间规整DTW算法进行声音识别,包括:
设上述步骤得到的待测试声音信号分为p帧矢量,即{T(1),T(2),…,T(n),…,T(p)},T(n)为第n帧的语音特征矢量,1≤n≤p;参考样本中有q帧矢量,即{R(1),R(2),…,R(m),…,R(q)},R(m)为第m帧的语音特征矢量,1≤m≤q;则动态时间规整DTW算法利用时间规整函数j=w(i)完成待测试矢量与模板矢量时间轴的映射,且规整函数w满足式(XX):
Figure PCTCN2017104879-appb-000070
在式(XX)中,d[T(i),R(w(i))]是待测试矢量T(i)与参考模板矢量R(w(i))之间的距离;T(i)表示T中第i帧的语音特征矢量,R(w(i))表示R中第w(i)帧的语音特征矢量;D表示待测试矢量与参考样本矢量之间的最小规整距离;
利用DTW将待测试声音模板与所有参考样本模板进行匹配后,匹配距离最小的参考样本模板就是独立分量识别的结果;当4路待测试声音匹配距离最小时所用的参考模板为同一个参考模板时,麦克风阵列采集的4路信号为单一声源,否则为多个声源;即可根据要求选取需要定位的独立声源信号。
实施例6
一种实现实施例1-5任一所述的基于渐进串行正交化盲源分离算法的改进声源定位方法的实现系统,如图1所示,包括4个麦克风与电压放大抬高电路模块、存储模块、算法处理和系统控制模块以及显示模块,4个麦克风与电压放大抬高电路模块均连接存储模块,存储模块、算法处理和系统控制模块、显示模块依次连接;
4个麦克风与电压放大抬高电路模块实时获取声音信号;存储模块用于存储获取的声音信号和时间信号;算法处理和系统控制模块通过基于渐进串行正交化盲源分离算法分离采集到的混合声音信号,通过选择TDOA声音定位算法计算时延,并列出方程组求解出声源位置;显示模块用于显示声源位置。
算法处理和系统控制模块为STM32开发平台;显示模块为液晶显示屏。

Claims (10)

  1. 一种基于渐进串行正交化盲源分离算法的改进声源定位方法,其特征在于,包括步骤如下:
    (1)通过麦克风阵列采集声音信号并存储;
    (2)采用基于渐进串行正交化盲源分离算法对步骤(1)采集到的声音信号分离,得到各个独立声源信号;
    (3)对步骤(2)得到的每个独立声源信号,提取梅尔频率倒谱系数作为声音特征参数,通过模式匹配算法识别声音信号,选取需要定位的声音的独立声源信号;
    (4)根据步骤(3)中模式匹配的结果,如果为单一声源,则进入步骤(5);如果为多个声源,则通过TDOA算法计算时延,求解声源位置;
    (5)先粗定位:求取信号的包络,低分辨率采样,通过广义自相关函数法粗略计算时延,根据粗略定位的点数对信号进行时域搬移;再细定位:高分辨率采样,通过广义自相关函数法计算时延,得到精确时延,求解声源位置。
  2. 根据权利要求1所述的一种基于渐进串行正交化盲源分离算法的改进声源定位方法,其特征在于,根据所述步骤(5)得到精确时延,包括步骤如下:
    A、设定通过步骤(3)获取4路声音信号,即x1(t)、x2(t)、x3(t)、x4(t),t为数字信号中采样点的序号,长度为N,将4路声音信号进行加窗滤波处理,消除噪声;
    B、对4路信号进行包络提取,只取包络的上半部分为有效信号,以Fs/n的频率进行抽点采样,得x′1(t)、x′2(t)、x′3(t)、x′4(t),Fs为盲源分离时的采样频率,n为大于1的整数;
    C、对x′1(t)、x′2(t)、x′3(t)、x′4(t)进行傅立叶变换到频域,即X′1(k)、X′2(k)、X′3(k)、X′4(k),其中k为与t对应的数字信号中采样点的序号,t、k均为整数;
    D、将x′1(t)作为基准信号,分别计算X′1(k)与X′2(k)、X′1(k)与X′3(k)、X′1(k)与X′4(k)的互功率谱G′12(k)、G′13(k)、G′14(k),对互功率谱G′12(k)、G′13(k)、G′14(k)进行PHAT加权操作,如式(Ⅰ)、式(Ⅱ)、式(Ⅲ)所示:
    Figure PCTCN2017104879-appb-100001
    Figure PCTCN2017104879-appb-100002
    Figure PCTCN2017104879-appb-100003
    式(Ⅰ)、式(Ⅱ)、式(Ⅲ)中,
    Figure PCTCN2017104879-appb-100004
    为X′1(k)的共轭;
    E、将互功率谱G′12(k)、G′13(k)、G′14(k)逆变换到时域,得到对应的广义互相关函数R′12(t)、R′13(t)、R′14(t);当R′12(t)、R′13(t)、R′14(t)分别取最大值时t所对应的时延即为3路声音信号x′2(t)、x′3(t)、x′4(t)与基准信号x′1(t)的时延估计t′12、t′13、t′14;
    设R′1s(t)取最大值时t的值为n′1s,s=2、3、4,所取声音信号的点数为N′=fix(N/n),采样频率为Fs/n,若n′1s>N′/2,则n′1s更新为n′1s-N′-1;若n′1s≤N′/2,则n′1s不变;由此计算得到n′12、n′13、n′14
    F、若n′1s≥0,将x1(t)在时域上向左平移n′1s*n个点;若n′1s<0,xs(t)在时域上向右平移n′1s*n个点;
    取x1(t)、xs(t)前N1个点信号为z(t)、zs(t),N1为大于2n小于N的整数;N1为信号长度,Fs为采样频率;
    按照步骤C-E采用广义互相关求取精确时延点数n″12,即将信号z1(t)、z2(t)傅立叶变换到频域,PHAT加权计算互功率谱,然后傅立叶反变换到时域求得互相关函数,取互相关最大时点数所对应的时延为两路的时延估计n″12;n″13和n″14与n″12计算方法一致;
    G、则x1(t)、x2(t)的时延
    Figure PCTCN2017104879-appb-100005
    同理
    Figure PCTCN2017104879-appb-100006
  3. 根据权利要求1所述的一种基于渐进串行正交化盲源分离算法的改进声源定位方法,其特征在于,所述步骤(4),如果为多个声源,则通过TDOA算法计算时延,包括步骤如下:
    a、设定步骤(2)获取需要进行定位的独立分量为yi(t),i为整数且1≤i≤4,t为数字信号中采样点的序号,将yi(t)、x1(t)、x2(t)、x3(t)、x4(t)这5路信号进行加窗滤波处理,再经傅立叶变换到频域,得到频域信号Yi(k)、X1(k)、X2(k)、X3(k)、X4(k),k为与t对应的数字信号采样点的序号;
    b、将独立分量yi(t)作为基准信号,分别计算Yi(k)与X1(k)、Yi(k)与X2(k)、Yi(k)与X3(k)、Yi(k)与X4(k)的互功率谱,即Gi1(k)、Gi2(k)、Gi3(k)、Gi4(k),对互功率谱Gi1(k)、Gi2(k)、Gi3(k)、Gi4(k)进行PHAT加权操作,如式(Ⅳ)、(Ⅴ)、(Ⅵ)、(Ⅶ)所示:
    Figure PCTCN2017104879-appb-100007
    Figure PCTCN2017104879-appb-100008
    Figure PCTCN2017104879-appb-100009
    Figure PCTCN2017104879-appb-100010
    式(Ⅳ)、(Ⅴ)、(Ⅵ)、(Ⅶ)中,
    Figure PCTCN2017104879-appb-100011
    为Yi(k)的共轭,
    Figure PCTCN2017104879-appb-100012
    为PHAT函数;
    c、将互功率谱Gi1(k)、Gi2(k)、Gi3(k)、Gi4(k)逆变换到时域,得到对应的广义互相关函数Ri1(n)、Ri2(n)、Ri3(n)、Ri4(n),当Ri1(n)、Ri2(n)、Ri3(n)、Ri4(n)分别取最大值时,n所对应的时延即为4路声音信号x1(t)、x2(t)、x3(t)、x4(t)与基准信号yi(t)的时延估计ti1、ti2、ti3、ti4,设Ri1(n)取最大值时的n的值为ni1,所取声音信号的点数为N,采样频率为Fs,若ni1>N/2,则
    Figure PCTCN2017104879-appb-100013
    若ni1≤N/2,则
    Figure PCTCN2017104879-appb-100014
    设Ri2(n)取最大值时的n的值为ni2,所取声音信号的点数为N,采样频率为Fs,若ni2>N/2,则
    Figure PCTCN2017104879-appb-100015
    若ni2≤N/2,则
    Figure PCTCN2017104879-appb-100016
    设Ri3(n)取最大值时的n的值为ni3,所取声音信号的点数为N,采样频率为Fs,若ni3>N/2,则
    Figure PCTCN2017104879-appb-100017
    若ni3≤N/2,则
    Figure PCTCN2017104879-appb-100018
    设Ri4(n)取最大值时的n的值为ni4,所取声音信号的点数为N,采样频率为Fs,若ni4>N/2,则
    Figure PCTCN2017104879-appb-100019
    若ni4≤N/2,则
    Figure PCTCN2017104879-appb-100020
    d、将ti1作为基准延时,则t12=ti1-ti2表示x1(t)相对于x2(t)的延时,t13=ti1-ti3表示x1(t)相对于x3(t)的延时,t14=ti1-ti4表示x1(t)相对于x4(t)的延时,得到x1(t)相对于x2(t)、x3(t)、x4(t)的延时t12、t13、t14
  4. 根据权利要求2或3所述的一种基于渐进串行正交化盲源分离算法的改进声源定位方法,其特征在于,所述步骤(4)、(5)中,求解声源位置,包括:设定声源位置坐标为(x,y,z),得到延时参数以后,通过式(Ⅷ)求取声源位置坐标:
    Figure PCTCN2017104879-appb-100021
    求得声源的位置坐标(x,y,z),式中,t12、t13、t14为三路之间的延时值,v为声音在空 气中的速度。
  5. 根据权利要求1所述的一种基于渐进串行正交化盲源分离算法的改进声源定位方法,其特征在于,所述麦克风阵列为:在三维直角坐标系下选择(0,0,0),(a,0,0),(0,a,0),(0,0,a)四个位置摆放麦克风,得到所述麦克风阵列,a为固定参数,表示三个坐标(a,0,0),(0,a,0),(0,0,a)到坐标系原点(0,0,0)位置麦克风的距离。
  6. 根据权利要求1所述的一种基于渐进串行正交化盲源分离算法的改进声源定位方法,其特征在于,所述步骤(1),通过麦克风阵列采集的声音信号即混合声音信号x(t),x(t)=[x1(t),x2(t),x3(t),x4(t)],x1(t)、x2(t)、x3(t)、x4(t)分别如式(Ⅸ)、(Ⅹ)、(Ⅺ)、(Ⅻ)所示:
    x1(t)=a11s1+a12s2+a13s3+a14s4 (Ⅸ)
    x2(t)=a21s1+a22s2+a23s3+a24s4 (Ⅹ)
    x3(t)=a31s1+a32s2+a33s3+a34s4 (Ⅺ)
    x4(t)=a41s1+a42s2+a43s3+a44s4(Ⅻ)
    式(Ⅸ)~(Ⅻ)中,s1,s2,s3,s4为4个独立声源发出的声音信号,aij(i=1,2,3,4;j=1,2,3,4)是实系数。
  7. 根据权利要求6所述的一种基于渐进串行正交化盲源分离算法的改进声源定位方法,其特征在于,步骤(2)中,采用基于渐进串行正交化盲源分离算法对步骤(1)采集到的声音信号分离,得到各个独立声源信号;包括步骤如下:
    ①采用麦克风阵列采集到环境声音,取出同一时间段内的4路声音信号进行中心化处理,即去均值处理,去均值后得到信号
    Figure PCTCN2017104879-appb-100022
    通过式(XIII)求得:
    Figure PCTCN2017104879-appb-100023
    ②对去均值后的声音信号
    Figure PCTCN2017104879-appb-100024
    进行白化处理,即对
    Figure PCTCN2017104879-appb-100025
    进行线性变换V,得到白化信号z(t):
    Figure PCTCN2017104879-appb-100026
    白化处理采用主分量分析方法,对信号进行去相关和缩放,线性白化变换V如式(XV)所示:
    Figure PCTCN2017104879-appb-100027
    式(XV)中,矩阵E以协方差矩阵
    Figure PCTCN2017104879-appb-100028
    的单位范数特征向量为列,D=diag(d1,d2,d3,d4)是以C的特征值为对角元素的特征矩阵;
    ③计算观测信号z(t)的独立成分个数,记为m,且m≤4;
    选择具有单位范数的初始化向量wp,p=1,2,…,m,令p=1;
    ④对wp进行如式(XVI)所示的迭代运算:
    Figure PCTCN2017104879-appb-100029
    式(XVI)中,函数g为g1(y)、g2(y)或g3(y);g1(y)=tanh(a1y),g2(y)=y*exp(-y^2/2),g3(y)=y^3;
    ⑤对步骤④中迭代后的wp进行正交化和标准化,正交化方法如式(XVII)所示:
    Figure PCTCN2017104879-appb-100030
    对wp标准化,即除以其范数,如式(XVIII)所示:
    wp=wp/norm(wp)  (XVIII)
    ⑥对步骤⑤中标准化后的wp进行检测,看其是否收敛,如果尚未收敛,则返回步骤④;
    ⑦p更新为p+1,如果p≤m,返回步骤④,否则,进入步骤⑧;
    ⑧通过步骤③~⑦的循环计算,得到解混矩阵W={w1,w2,…,wm}T,m≤4;由式(XIX)得到源信号y(t):
    y(t)=Wx(t)  (XIX)
    式(XIX)中,y(t)=[y1(t),y2(t),…yi(t)…,ym(t)],i=1,2,…,m,分别为麦克风阵列采集声音信号经过盲源分离后得到的m个独立分量,即独立声源信号。
  8. 根据权利要求7所述的一种基于渐进串行正交化盲源分离算法的改进声源定位方法,其特征在于,所述步骤(3),对得到的每个独立声源信号,提取梅尔频率倒谱系数作为声音特征参数,通过模式匹配算法识别声音信号,选取需要定位的声音的独立声源信号;包括步骤如下:
    ⑨对步骤⑧中分离出的源信号y(t)进行如下处理:
    对源信号y(t)做预加重处理,即将源信号y(t)通过一个高通滤波器,该高通滤波器的传递函数为H(z)=1-μz^(-1),0.9≤μ≤1.0;
    对预加重处理后的源信号y(t)做分帧处理,帧长为10ms-30ms,帧移为帧长的1/2-1/3;
    对每帧信号做加窗处理,窗函数为汉明窗,公式为
    Figure PCTCN2017104879-appb-100031
    ⑩对步骤⑨处理后的每帧信号进行快速傅立叶变换,将信号从时域转到频域,得到信号的频谱,再取模的平方作为离散功率谱S(k);
    Figure PCTCN2017104879-appb-100032
    将每帧的频谱参数通过梅尔刻度滤波器,梅尔刻度滤波器包括V个三角形带通滤波器,20≤V≤30,得到V个参数Pv,v=0,1,…,V-1;将每个频带的输出取对数,得到Lv,v=0,1,…,V-1;将得到的V个参数进行离散余弦变换,得到Dv,v=0,1,…,V-1;去掉D0,取D1,D2,…,Dk作为MFCC的参数;
    Figure PCTCN2017104879-appb-100033
    通过动态时间规整DTW算法进行声音识别,包括:
    设上述步骤得到的待测试声音信号分为p帧矢量,即{T(1),T(2),…,T(n),…,T(p)},T(n)为第n帧的语音特征矢量,1≤n≤p;参考样本中有q帧矢量,即{R(1),R(2),…,R(m),…,R(q)},R(m)为第m帧的语音特征矢量,1≤m≤q;则动态时间规整DTW算法利用时间规整函数j=w(i)完成待测试矢量与模板矢量时间轴的映射,且规整函数w满足式(XX):
    Figure PCTCN2017104879-appb-100035
    在式(XX)中,d[T(i),R(w(i))]是待测试矢量T(i)与参考模板矢量R(w(i))之间的距离;T(i)表示T中第i帧的语音特征矢量,R(w(i))表示R中第w(i)帧的语音特征矢量;D表示待测试矢量与参考样本矢量之间的最小规整距离;
    利用DTW将待测试声音模板与所有参考样本模板进行匹配后,匹配距离最小的参考样本模板就是独立分量识别的结果;当4路待测试声音匹配距离最小时所用的参考模板为同一个参考模板时,麦克风阵列采集的4路信号为单一声源,否则为多个声源。
  9. 一种实现权利要求1或权利要求4-8任一所述的一种基于渐进串行正交化盲源分离算法的改进声源定位方法的实现系统,其特征在于,包括4个麦克风与电压放大抬高电路模块、存储模块、算法处理和系统控制模块以及显示模块,所述4个麦克风与电压放大抬高电路模块均连接所述存储模块,所述存储模块、所述算法处理和系统控制模块、所述显示模块依次连接;
    所述4个麦克风与电压放大抬高电路模块实时获取声音信号;所述存储模块用于存储获取的声音信号和时间信号;所述算法处理和系统控制模块通过基于渐进串行正交化盲源分离算法分离采集到的混合声音信号,通过选择TDOA声音定位算法计算时延,并列出方程组求解出声源位置;所述显示模块用于显示声源位置。
  10. 根据权利要求9所述的实现系统,所述算法处理和系统控制模块为STM32开发平台;所述显示模块为液晶显示屏。
PCT/CN2017/104879 2017-09-29 2017-09-30 一种基于渐进串行正交化盲源分离算法的改进声源定位方法及其实现系统 WO2019061439A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710911515.0A CN107644650B (zh) 2017-09-29 2017-09-29 一种基于渐进串行正交化盲源分离算法的改进声源定位方法及其实现系统
CN201710911515.0 2017-09-29

Publications (1)

Publication Number Publication Date
WO2019061439A1 true WO2019061439A1 (zh) 2019-04-04

Family

ID=61112147

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/104879 WO2019061439A1 (zh) 2017-09-29 2017-09-30 一种基于渐进串行正交化盲源分离算法的改进声源定位方法及其实现系统

Country Status (2)

Country Link
CN (1) CN107644650B (zh)
WO (1) WO2019061439A1 (zh)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108648758B (zh) * 2018-03-12 2020-09-01 北京云知声信息技术有限公司 医疗场景中分离无效语音的方法及系统
CN108922557A (zh) * 2018-06-14 2018-11-30 北京联合大学 一种聊天机器人的多人语音分离方法及系统
CN108877831B (zh) * 2018-08-28 2020-05-15 山东大学 基于多标准融合频点筛选的盲源分离快速方法及系统
CN110888112B (zh) * 2018-09-11 2021-10-22 中国科学院声学研究所 一种基于阵列信号的多目标定位识别方法
CN109671439B (zh) * 2018-12-19 2024-01-19 成都大学 一种智能化果林鸟害防治设备及其鸟类定位方法
CN109741759B (zh) * 2018-12-21 2020-07-31 南京理工大学 一种面向特定鸟类物种的声学自动检测方法
CN110007276B (zh) * 2019-04-18 2021-01-12 太原理工大学 一种声源定位方法及系统
CN110361695B (zh) * 2019-06-06 2021-06-15 杭州未名信科科技有限公司 分置式声源定位系统和方法
CN111856401A (zh) * 2020-07-02 2020-10-30 南京大学 一种基于互谱相位拟合的时延估计方法
CN111787609A (zh) * 2020-07-09 2020-10-16 北京中超伟业信息安全技术股份有限公司 基于人体声纹特征和麦克风基站的人员定位系统及方法
CN114088332B (zh) * 2021-11-24 2023-08-22 成都流体动力创新中心 一种用于旋转叶片声音信号提取的风洞背景噪声修正方法
CN114220454B (zh) * 2022-01-25 2022-12-09 北京荣耀终端有限公司 一种音频降噪方法、介质和电子设备
CN115902776B (zh) * 2022-12-09 2023-06-27 中南大学 一种基于被动式声音信号的声源定位方法
CN116866124A (zh) * 2023-07-13 2023-10-10 中国人民解放军战略支援部队航天工程大学 一种基于基带信号时间结构的盲分离方法
CN118016102A (zh) * 2024-04-08 2024-05-10 湖北经济学院 一种基于非调制声音信号的定位方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103021405A (zh) * 2012-12-05 2013-04-03 渤海大学 基于music和调制谱滤波的语音信号动态特征提取方法
CN103258533A (zh) * 2013-05-27 2013-08-21 重庆邮电大学 远距离语音识别中的模型域补偿新方法
CN104766093A (zh) * 2015-04-01 2015-07-08 中国科学院上海微系统与信息技术研究所 一种基于麦克风阵列的声目标分类方法
US20160358606A1 (en) * 2015-06-06 2016-12-08 Apple Inc. Multi-Microphone Speech Recognition Systems and Related Techniques
CN106646376A (zh) * 2016-12-05 2017-05-10 哈尔滨理工大学 基于加权修正参数的p范数噪声源定位识别方法

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002061732A1 (en) * 2001-01-30 2002-08-08 Thomson Licensing S.A. Geometric source separation signal processing technique
US6865490B2 (en) * 2002-05-06 2005-03-08 The Johns Hopkins University Method for gradient flow source localization and signal separation
US8073690B2 (en) * 2004-12-03 2011-12-06 Honda Motor Co., Ltd. Speech recognition apparatus and method recognizing a speech from sound signals collected from outside
RU2565338C2 (ru) * 2010-02-23 2015-10-20 Конинклейке Филипс Электроникс Н.В. Определение местоположения аудиоисточника
CN101957443B (zh) * 2010-06-22 2012-07-11 嘉兴学院 声源定位方法
CN104053107B (zh) * 2014-06-06 2018-06-05 重庆大学 一种用于噪声环境下声源分离和定位方法
CN105872366B (zh) * 2016-03-30 2018-08-24 南昌大学 一种基于fastica算法的盲源分离技术控制聚焦系统


Also Published As

Publication number Publication date
CN107644650A (zh) 2018-01-30
CN107644650B (zh) 2020-06-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17927052

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17927052

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.10.2020)
