WO2019061439A1 - Improved sound source localization method based on a progressive serial orthogonalization blind source separation algorithm, and implementation system therefor - Google Patents
Improved sound source localization method based on a progressive serial orthogonalization blind source separation algorithm, and implementation system therefor
- Publication number
- WO2019061439A1 WO2019061439A1 PCT/CN2017/104879 CN2017104879W WO2019061439A1 WO 2019061439 A1 WO2019061439 A1 WO 2019061439A1 CN 2017104879 W CN2017104879 W CN 2017104879W WO 2019061439 A1 WO2019061439 A1 WO 2019061439A1
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S5/00—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
- G01S5/18—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
- G01S5/20—Position of source determined by a plurality of spaced direction-finders
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0272—Voice signal separating
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
- G10L25/24—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being the cepstrum
Definitions
- the invention relates to an improved sound source localization method based on progressive serial orthogonalization blind source separation algorithm and an implementation system thereof, and belongs to the technical field of sound source localization.
- Sound is an important carrier of information dissemination in nature.
- People can not only obtain the voice information carried by a sound, but also determine the sound's bearing from the characteristics of its propagation and of the propagation path itself.
- Positioning methods for an unknown target position have mainly relied on radio, laser, ultrasound, and the like: the position of the measured object is computed by actively transmitting a detection signal and receiving the wave reflected by the measured object. Because detection is active and both transmission and reception use waves of predefined frequency, such methods are not susceptible to interference from the natural environment and offer high precision and strong anti-interference performance. However, active positioning requires a strong transmit power, so it cannot be applied in low-power or energy-constrained environments.
- Sound source localization adopts the passive principle, which is easy to conceal; it uses ubiquitous sound waves, and its equipment cost and power consumption are low, so it has attracted wide attention and application.
- Blind source separation is a signal processing technique developed in the 1990s: without knowing the parameters of the source signals or of the transmission channel, it recovers the components of the source signals from the observed signals alone, based on the statistical characteristics of the sources.
- The "source" here refers to the original signals, that is, the independent components; the method is "blind" both because the source signals cannot be observed directly and because the way the sources are mixed is unknown. Blind source separation techniques are therefore used to process mixed sound signals when the source signals and the transmission-channel parameters are unknown.
- the progressive serial orthogonal blind source separation algorithm is a kind of blind source separation algorithm. The independent components are found by the fixed point iteration of progressive orthogonalization.
- Sound source localization based on arrival delay assumes that a sound wave propagating through air at constant speed reaches a pair of receivers located at different positions with different phases. From the phase difference of the signals received by the receivers, a delay algorithm yields the time difference of arrival of the sound at each receiving end, from which the location of the sound source is found.
- This positioning algorithm has the following advantages: first, the hardware requirements are modest; second, the steps are simple and the computational load is small; third, it is easy to combine with other systems that need localization data.
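The arrival-delay principle can be illustrated with a rough back-of-the-envelope calculation (the numbers here are illustrative, not taken from the patent):

```python
c, fs = 343.0, 48000   # speed of sound in air (m/s), a sampling rate (Hz)
path_diff = 0.1        # source is 0.1 m closer to one receiver than the other
dt = path_diff / c     # arrival-time difference in seconds
lag = round(dt * fs)   # the same difference expressed in sampling points
print(lag)             # prints 14
```

So a 10 cm path difference maps to about 14 sample points at 48 kHz, which is the quantity the delay algorithms below estimate.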
- Chinese patent document CN104181506A discloses a sound source localization method based on improved PHAT-weighted time delay estimation: four channels of sound signals are acquired by a microphone array and converted into digital signals by A/D sampling circuits; delay estimation is performed by an improved PHAT-weighted cross-correlation function method to obtain the delay estimates; and, combining the spatial positions of the placed microphone array, the nonlinear equations are solved iteratively to obtain the relative position of the sound source.
- The system described in that patent does not recognize multiple sound sources and does not distinguish directional noise.
- Chinese patent document CN104614069A discloses a power-equipment fault sound detection method based on a joint approximate diagonalization blind source separation algorithm. The specific steps include: (1) collecting sound signals with a microphone array; (2) separating the sound signals collected in step (1) into independent sound source signals using the joint approximate diagonalization blind source separation algorithm; (3) extracting the Mel-frequency cepstral coefficients (MFCC) of each independent source signal as sound feature parameters and identifying the sound signals with a pattern matching algorithm: after the sound template under test is matched against all reference sample templates, the reference template with the smallest matching distance gives the recognition result for the working sound of the power equipment.
- The performance of the joint approximate diagonalization algorithm used in that patent is strongly affected by the number of covariance matrices: the more matrices there are, the more complicated the computation becomes.
- the present invention proposes an improved sound source localization method based on the progressive serial orthogonalization blind source separation algorithm
- the present invention also proposes an implementation system for the above improved sound source localization method.
- An improved sound source localization method based on the progressive serial orthogonalization blind source separation algorithm comprises the following steps:
- step (2) Separating the sound signals collected in step (1) by using a progressive serial orthogonal blind source separation algorithm to obtain respective independent sound source signals;
- step (3): for each independent sound source signal obtained in step (2), extracting the Mel-frequency cepstral coefficients (MFCC) as sound feature parameters, identifying the sound signals by a pattern matching algorithm, and selecting the independent sound source signal of the sound to be localized;
- step (4): according to the pattern matching result of step (3), if there is a single sound source, proceeding to step (5); if there are multiple sound sources, calculating the time delay by the TDOA algorithm and solving for the sound source position;
- step (5): coarse positioning first: obtaining the envelope of the signal, sampling at low resolution, roughly calculating the delay by the generalized autocorrelation function method, and time-shifting the signal by the coarsely estimated number of points; then fine positioning: sampling at high resolution, calculating the delay by the generalized autocorrelation function method, and obtaining the precise delay to solve for the sound source position.
- The accuracy of the delay estimate is limited by the sampling frequency.
- The higher the required precision, the higher the sampling frequency needed.
- A high sampling frequency produces an extremely large number of sampling points.
- The amount of computation grows accordingly.
- In the coarse-then-fine positioning algorithm, the signal is first time-shifted using a low-resolution estimate, and high resolution is then used for precise delay calibration.
- This algorithm achieves the computational accuracy of high-resolution sampling.
- The algorithm performs only one time-domain shift, so only a short effective segment needs to be processed during high-precision calibration; this yields the delay while reducing the computational load of the algorithm. By the same principle, the algorithm removes the spacing limitation between the sampling microphones: when the spacing exceeds the effective duration, a single coarse time-domain shift still allows the precise delay to be computed.
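The coarse-then-fine scheme above can be sketched as follows. Plain cross-correlation stands in for the PHAT-weighted version, and the decimation factor and fine-window length are illustrative assumptions, not values fixed by the text:

```python
import numpy as np

def xcorr_lag(a, b):
    """Lag (in samples) by which a trails b, via full cross-correlation."""
    r = np.correlate(a, b, mode="full")
    return int(np.argmax(r)) - (len(b) - 1)

def coarse_fine_lag(x1, x2, n=8, fine_len=128):
    """Two-stage delay estimate: decimated envelopes give a rough lag,
    the signal is time-shifted by that amount, and a short full-rate
    window refines the estimate."""
    coarse = xcorr_lag(np.abs(x1)[::n], np.abs(x2)[::n]) * n  # low resolution
    x2_shifted = np.roll(x2, coarse)                          # one time-domain shift
    fine = xcorr_lag(x1[:fine_len], x2_shifted[:fine_len])    # high resolution
    return coarse + fine

# synthetic check: a smooth pulse delayed by 13 samples
t = np.arange(400)
x2 = np.exp(-((t - 40) / 8.0) ** 2)   # reference channel
x1 = np.exp(-((t - 53) / 8.0) ** 2)   # same pulse, 13 samples later
```

Only the short fine window is correlated at full rate, which is where the computational saving described above comes from.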
- an accurate delay is obtained according to the step (5), and the steps are as follows:
- step A: obtaining the 4 sound signals from step (3), namely x1(t), x2(t), x3(t), x4(t), where t is the index of the sampling point in the digital signal and the length is N, and applying windowed filtering to the 4 sound signals to eliminate noise;
- N1 is an integer greater than 2n and less than N; N1 is the signal length and Fs is the sampling frequency;
- Generalized autocorrelation is used to obtain the precise delay point count n″12: the signals z1(t) and z2(t) are Fourier-transformed to the frequency domain, the cross-power spectrum is PHAT-weighted, and an inverse Fourier transform back to the time domain yields the cross-correlation function; the time corresponding to the maximum of the cross-correlation gives the two-channel delay estimate; n″13 and n″14 are calculated by the same method as n″12;
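The PHAT-weighted correlation with wrap-around of lags past the midpoint (the n > N/2 rule used throughout) might be sketched as:

```python
import numpy as np

def gcc_phat_lag(x1, x2, fs):
    """PHAT-weighted generalized cross-correlation: FFT both signals,
    normalize the cross-power spectrum so only phase remains, inverse
    FFT, and take the lag of the peak, mapping lags past the midpoint
    to negative delays."""
    n = len(x1) + len(x2)                  # zero-pad against circular wrap
    X1 = np.fft.rfft(x1, n)
    X2 = np.fft.rfft(x2, n)
    G = X1 * np.conj(X2)                   # cross-power spectrum
    G /= np.abs(G) + 1e-12                 # PHAT weighting
    r = np.fft.irfft(G, n)
    k = int(np.argmax(r))
    if k > n // 2:                         # wrap large positive lags
        k -= n                             # to negative delays
    return k / fs                          # delay of x1 relative to x2 (s)

fs = 8000
base = np.sin(2 * np.pi * 200 * np.arange(1024) / fs) * np.hanning(1024)
d = 5                                      # x1 trails x2 by 5 samples
x2 = np.concatenate([base, np.zeros(16)])
x1 = np.concatenate([np.zeros(d), base, np.zeros(16 - d)])
lag = round(gcc_phat_lag(x1, x2, fs) * fs)
```

With the synthetic shift above, `lag` recovers the 5-sample delay exactly.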
- the delay is calculated by the TDOA algorithm, and the steps are as follows:
- step a: taking the independent component to be localized from step (2) as yi(t), where i is an integer with 1 ≤ i ≤ 4 and t is the index of the sampling point in the digital signal; applying windowed filtering to the five signals yi(t), x1(t), x2(t), x3(t), x4(t) and then Fourier-transforming them to the frequency domain to obtain the frequency-domain signals Yi(k), X1(k), X2(k), X3(k), X4(k), where k is the index of the digital-signal sampling point corresponding to t;
- The n at which each generalized cross-correlation attains its maximum gives the delay estimates ti1, ti2, ti3, ti4 of the four sound signals x1(t), x2(t), x3(t), x4(t) relative to the reference signal yi(t). Let ni1 be the value of n at which Ri1(n) is maximal, with N sound-signal points and sampling frequency Fs: if ni1 > N/2, ni1 is updated to ni1 - N - 1; if ni1 ≤ N/2, ni1 is unchanged.
- Let ni2 be the value of n at which Ri2(n) is maximal, with N sound-signal points and sampling frequency Fs; it is updated in the same way when ni2 > N/2 and left unchanged when ni2 ≤ N/2.
- ni3 and ni4, from the maxima of Ri3(n) and Ri4(n), are handled identically.
- Solving the sound source position includes: setting the sound source coordinates to (x, y, z); with the delay parameters obtained, the position coordinates of the sound source are found from equation (VIII):
- The microphone array: microphones are placed at the four positions (0, 0, 0), (a, 0, 0), (0, a, 0), (0, 0, a) of a three-dimensional Cartesian coordinate system, where a is a fixed parameter denoting the distance from each of the three positions (a, 0, 0), (0, a, 0), (0, 0, a) to the microphone at the origin (0, 0, 0).
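Equation (VIII) itself is not reproduced in this text, so the following is only a sketch of solving the same geometry (microphones at the origin and on the three axes) from the range differences, using a standard derivation in which each coordinate is linear in the distance to the origin microphone:

```python
import numpy as np

def tdoa_position(dd2, dd3, dd4, a):
    """Recover the source (x, y, z) from range differences
    dd_j = c * t_1j = (distance to mic j) - (distance to mic 1),
    for mics at (0,0,0), (a,0,0), (0,a,0), (0,0,a)."""
    dd = np.array([dd2, dd3, dd4], dtype=float)
    alpha = (a * a - dd * dd) / (2 * a)    # x = alpha2 + beta2 * r1, etc.
    beta = -dd / a
    A = beta @ beta - 1.0                  # r1 solves A*r1^2 + B*r1 + C = 0
    B = 2.0 * (alpha @ beta)
    C = alpha @ alpha
    roots = np.roots([A, B, C])
    r1 = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    return alpha + beta * r1

# forward check: a known source position must be recovered
mics = np.array([[0, 0, 0], [0.5, 0, 0], [0, 0.5, 0], [0, 0, 0.5]])
src = np.array([1.0, 2.0, 0.5])
dists = np.linalg.norm(src - mics, axis=1)
dd = dists[1:] - dists[0]                  # range differences c * t_1j
est = tdoa_position(dd[0], dd[1], dd[2], 0.5)
```

Substituting each hyperbolic constraint into the distance to the origin microphone reduces the nonlinear system to one quadratic, avoiding general iterative solving for this particular array geometry.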
- step (2) the sound signals collected in step (1) are separated by a progressive serial orthogonal blind source separation algorithm to obtain respective independent sound source signals; the steps are as follows:
- the whitening process uses the principal component analysis method to decorrelate and scale the signal.
- the linear whitening transformation V is as shown in equation (XV):
- In equation (XV), E is the matrix whose columns are the unit-norm eigenvectors of the covariance matrix C, and D = diag(d1, d2, d3, d4) is the diagonal matrix whose diagonal elements are the corresponding eigenvalues of C;
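The whitening step can be sketched as follows, assuming the standard form V = D^(-1/2) E^T for equation (XV):

```python
import numpy as np

def whiten(x):
    """PCA whitening: with C the covariance of the centred observations,
    E the matrix of its unit-norm eigenvector columns and D = diag(d1..d4)
    its eigenvalues, V = D^(-1/2) E^T decorrelates and rescales the
    channels so that cov(V x) = I."""
    x = x - x.mean(axis=1, keepdims=True)
    C = np.cov(x)                          # 4x4 covariance matrix
    d, E = np.linalg.eigh(C)               # eigenvalues, eigenvector columns
    V = np.diag(1.0 / np.sqrt(d)) @ E.T    # the linear whitening transform
    return V @ x, V

rng = np.random.default_rng(0)
mixed = rng.standard_normal((4, 4)) @ rng.standard_normal((4, 2000))
z, V = whiten(mixed)                       # z has identity covariance
```

Whitening reduces the separation problem to finding an orthogonal de-mixing matrix, which is what the fixed-point iteration below exploits.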
- step a: determine the number of independent components of the observed signal z(t), denoted m, with m ≤ 4; because the microphone array of step a consists of 4 microphones, 4 channels of sound signals are collected, and by the principle of blind source separation the number of independent components cannot exceed the number of observed signals.
- step 6: check whether the normalized w_p of step 5 has converged; if it has not converged, return to step 4;
- step 7: p is updated to p + 1; if p ≤ m, return to step 4; otherwise proceed to step 8;
- the m independent components of the microphone array are obtained by blind source separation, that is, independent sound source signals.
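The extraction loop of steps 4-8 can be sketched as a deflationary fixed-point iteration; the tanh nonlinearity, tolerance, and iteration cap are assumptions not fixed by the text:

```python
import numpy as np

def psobss(z, m, iters=200, seed=1):
    """Progressive serial orthogonalization on whitened data z: extract
    m components one at a time with a FastICA-style fixed-point update,
    orthogonalizing each candidate w_p against the rows already found
    before renormalizing."""
    rng = np.random.default_rng(seed)
    n = z.shape[0]
    W = np.zeros((m, n))
    for p in range(m):
        w = rng.standard_normal(n)
        w /= np.linalg.norm(w)
        for _ in range(iters):
            u = np.tanh(w @ z)
            # fixed-point update: E{z g(w'z)} - E{g'(w'z)} w
            w_new = (z * u).mean(axis=1) - (1 - u * u).mean() * w
            w_new -= W[:p].T @ (W[:p] @ w_new)   # serial orthogonalization
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1.0) < 1e-10
            w = w_new
            if converged:
                break
        W[p] = w
    return W @ z                                 # the independent components

# demo: two whitened mixtures of two super-Gaussian (Laplacian) sources
rng = np.random.default_rng(0)
s = rng.laplace(size=(2, 5000))
x = np.array([[1.0, 0.6], [0.4, 1.0]]) @ s
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = np.diag(d ** -0.5) @ E.T @ x                 # whiten first
y = psobss(z, 2)                                 # recovered components
```

Each recovered row should match one source up to sign and ordering, which is the usual indeterminacy of blind source separation.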
- step (3) extracts the Mel-frequency cepstral coefficients (MFCC) from each of the obtained independent sound source signals.
- The pre-emphasized source signal y(t) is divided into frames, with a frame length of 10 ms-30 ms and a frame shift of 1/2 to 1/3 of the frame length; this avoids excessive change of the characteristics between adjacent frames;
- Windowing each frame of the signal increases the continuity of its left and right ends.
- The window function is a Hamming window, given by
- step 10: performing a fast Fourier transform (FFT) on each frame of the signal processed in step 9, moving the signal from the time domain to the frequency domain to obtain its spectrum, and then taking the squared modulus as the discrete power spectrum S(k);
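The pre-emphasis, framing, windowing, and FFT steps above might look like this; the 25 ms / 10 ms framing and 512-point FFT are assumptions within the ranges stated in the text:

```python
import numpy as np

def power_spectrum_frames(y, fs, frame_ms=25, shift_ms=10, nfft=512):
    """Pre-emphasis, framing, a Hamming window per frame, then FFT and
    squared modulus to get the discrete power spectrum S(k).  Mel
    filtering, the log, and the DCT would follow to complete the MFCCs."""
    y = np.append(y[0], y[1:] - 0.97 * y[:-1])     # pre-emphasis filter
    flen = int(fs * frame_ms // 1000)              # 25 ms -> 200 samples
    step = int(fs * shift_ms // 1000)              # 10 ms -> 80 samples
    nframes = 1 + (len(y) - flen) // step
    idx = np.arange(flen) + step * np.arange(nframes)[:, None]
    frames = y[idx] * np.hamming(flen)             # windowed frames
    return np.abs(np.fft.rfft(frames, nfft)) ** 2  # power spectrum S(k)

fs = 8000
tone = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)  # 1 s of a 1 kHz tone
S = power_spectrum_frames(tone, fs)                   # (frames, nfft/2 + 1)
```

For the 1 kHz test tone, each frame's power spectrum peaks at bin 64 (1000/8000 x 512), confirming the framing and FFT bookkeeping.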
- In equation (XX), d[T(i), R(w(j))] is the distance between the feature vector T(i) under test and the reference template vector R(w(j)); T(i) is the speech feature vector of the i-th frame of T; R(w(j)) is the speech feature vector of frame w(j) of R; D is the minimum distance between the vectors under test and the reference sample vectors;
- The reference sample template with the smallest matching distance gives the recognition result for the independent component.
- The same set of reference templates is used for every independent component.
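The minimum-distance matching rule can be sketched as follows; equal frame counts and a Euclidean frame distance are simplifying assumptions (the w(j) in equation (XX) suggests a DTW-style frame alignment in the actual method):

```python
import numpy as np

def recognize(T, templates):
    """Accumulate a per-frame distance between the test feature vectors T
    and each reference template, and return the name of the template with
    the smallest total distance."""
    def total_distance(T, R):
        return sum(np.linalg.norm(t - r) for t, r in zip(T, R))
    return min(templates, key=lambda name: total_distance(T, templates[name]))

templates = {
    "alarm": np.zeros((5, 12)),   # toy 5-frame, 12-coefficient
    "speech": np.ones((5, 12)),   # reference MFCC templates
}
test_vec = np.full((5, 12), 0.2)  # closer to the "alarm" template
```

Here `recognize(test_vec, templates)` picks `"alarm"`, the template at the smaller accumulated distance.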
- The four signals collected by the microphone array may correspond to a single sound source or to multiple sound sources; the independent sound source to be localized can be selected as required.
- An implementation system for the above sound source localization method comprises four microphones with voltage amplification and level-lifting circuit modules, a storage module, an algorithm processing and system control module, and a display module; the four microphones and the voltage amplification and level-lifting circuit modules are connected to the storage module, and the storage module, the algorithm processing and system control module, and the display module are connected in sequence;
- The four microphones and the voltage amplification and level-lifting circuit modules acquire sound signals in real time; the storage module stores the acquired sound signals and the time signal; the algorithm processing and system control module separates the collected mixed sound signals by the blind source separation algorithm based on progressive serial orthogonalization, calculates the time delay with the selected TDOA sound localization algorithm, and sets up the equations to solve for the sound source position; the display module displays the sound source position.
- The algorithm processing and system control module is an STM32 development platform; the display module is a liquid crystal display.
- The invention uses the TDOA algorithm to calculate the time delay and obtain the sound source position.
- When the separated signal contains multiple sound sources, the separated target signal is correlated directly with the mixed signals to calculate the delay, so the computation is small and fast; when the collected signal is a single sound source, the improved TDOA algorithm is used for the delay calculation, which improves accuracy to a certain extent and reduces the computational load of the algorithm.
- The invention adopts a passive localization method, which is easy to conceal and has low power consumption.
- The invention combines blind source separation with sound source localization, making up for the inability of previous sound source localization methods to recognize multiple sound sources.
- FIG. 1 is a structural block diagram of an implementation system of an improved sound source localization method based on a progressive serial orthogonalization blind source separation algorithm according to the present invention.
- FIG. 2 is a schematic flow chart of an improved sound source localization method based on a progressive serial orthogonalization blind source separation algorithm according to the present invention.
- FIG. 3 is a schematic flow chart of an improved TDOA algorithm of the present invention.
- An improved sound source localization method based on progressive serial orthogonal blind source separation algorithm includes the following steps:
- The microphone array: microphones are placed at the four positions (0, 0, 0), (a, 0, 0), (0, a, 0), (0, 0, a) of a three-dimensional Cartesian coordinate system to obtain the microphone array, where a is a fixed parameter denoting the distance from each of the three positions (a, 0, 0), (0, a, 0), (0, 0, a) to the microphone at the origin (0, 0, 0).
- step (2): using the progressive serial orthogonalization blind source separation algorithm to separate the sound signals collected in step (1) and obtain the independent sound source signals; for sound localization in complex environments, sound source separation is used to extract the target sound source from the ambient mixed sound signal, thereby improving the accuracy of sound localization in a complex environment.
- step (3): for each independent sound source signal obtained in step (2), extracting the Mel-frequency cepstral coefficients (MFCC) as sound feature parameters, identifying the sound signals by a pattern matching algorithm, and selecting the independent sound source signal of the sound to be localized;
- step (4): according to the pattern matching result of step (3), if there is a single sound source, proceeding to step (5); if there are multiple sound sources, calculating the time delay by the TDOA algorithm and solving for the sound source position;
- step (5): coarse positioning first: obtaining the envelope of the signal, sampling at low resolution, roughly calculating the delay by the generalized autocorrelation function method, and time-shifting the signal by the coarsely estimated number of points; then fine positioning: sampling at high resolution, calculating the delay by the generalized autocorrelation function method, and obtaining the precise delay to solve for the sound source position.
- The accuracy of the delay estimate is limited by the sampling frequency.
- The higher the required precision, the higher the sampling frequency needed.
- A high sampling frequency produces an extremely large number of sampling points.
- The amount of computation grows accordingly.
- In the coarse-then-fine positioning algorithm, the signal is first time-shifted using a low-resolution estimate, and high resolution is then used for precise delay calibration.
- This algorithm achieves the computational accuracy of high-resolution sampling.
- The algorithm performs only one time-domain shift, so only a short effective segment needs to be processed during high-precision calibration; this yields the delay while reducing the computational load of the algorithm. By the same principle, the algorithm removes the spacing limitation between the sampling microphones: when the spacing exceeds the effective duration, a single coarse time-domain shift still allows the precise delay to be computed.
- The improved sound source localization method based on a progressive serial orthogonalization blind source separation algorithm according to Embodiment 1, wherein the precise delay is obtained according to step (5) as shown in FIG. 3, includes the following steps:
- step A: obtaining the 4 sound signals from step (3), namely x1(t), x2(t), x3(t), x4(t), where t is the index of the sampling point in the digital signal and the length is N, and applying windowed filtering to the 4 sound signals to eliminate noise;
- N1 is an integer greater than 2n and less than N; N1 is the signal length and Fs is the sampling frequency;
- Generalized autocorrelation is used to obtain the precise delay point count n″12: the signals z1(t) and z2(t) are Fourier-transformed to the frequency domain, the cross-power spectrum is PHAT-weighted, and an inverse Fourier transform back to the time domain yields the cross-correlation function; the time corresponding to the maximum of the cross-correlation gives the two-channel delay estimate; n″13 and n″14 are calculated by the same method as n″12;
- step a: taking the independent component to be localized from step (2) as yi(t), where i is an integer with 1 ≤ i ≤ 4 and t is the index of the sampling point in the digital signal; applying windowed filtering to the five signals yi(t), x1(t), x2(t), x3(t), x4(t) and then Fourier-transforming them to the frequency domain to obtain the frequency-domain signals Yi(k), X1(k), X2(k), X3(k), X4(k), where k is the index of the digital-signal sampling point corresponding to t;
- The n at which each generalized cross-correlation attains its maximum gives the delay estimates ti1, ti2, ti3, ti4 of the four sound signals x1(t), x2(t), x3(t), x4(t) relative to the reference signal yi(t). Let ni1 be the value of n at which Ri1(n) is maximal, with N sound-signal points and sampling frequency Fs: if ni1 > N/2, ni1 is updated to ni1 - N - 1; if ni1 ≤ N/2, ni1 is unchanged.
- Let ni2 be the value of n at which Ri2(n) is maximal, with N sound-signal points and sampling frequency Fs; it is updated in the same way when ni2 > N/2 and left unchanged when ni2 ≤ N/2.
- ni3 and ni4, from the maxima of Ri3(n) and Ri4(n), are handled identically.
- The improved sound source localization method based on a progressive serial orthogonalization blind source separation algorithm according to Embodiment 1, wherein in step (2) the sound signals collected in step (1) are separated by the progressive serial orthogonalization blind source separation algorithm to obtain the individual independent sound source signals, as follows:
- the whitening process uses the principal component analysis method to decorrelate and scale the signal.
- the linear whitening transformation V is as shown in equation (XV):
- In equation (XV), E is the matrix whose columns are the unit-norm eigenvectors of the covariance matrix C, and D = diag(d1, d2, d3, d4) is the diagonal matrix whose diagonal elements are the corresponding eigenvalues of C;
- step a: determine the number of independent components of the observed signal z(t), denoted m, with m ≤ 4; because the microphone array of step a consists of 4 microphones, 4 channels of sound signals are collected, and by the principle of blind source separation the number of independent components cannot exceed the number of observed signals.
- step 6: check whether the normalized w_p of step 5 has converged; if it has not converged, return to step 4;
- step 7: p is updated to p + 1; if p ≤ m, return to step 4; otherwise proceed to step 8;
- the m independent components of the microphone array are obtained by blind source separation, that is, independent sound source signals.
- The Mel-frequency cepstral coefficients (MFCC) are used as the sound feature parameters; the sound signal is identified by the pattern matching algorithm and the independent sound source signal of the sound to be localized is selected, as follows:
- The pre-emphasized source signal y(t) is divided into frames, with a frame length of 10 ms-30 ms and a frame shift of 1/2 to 1/3 of the frame length; this avoids excessive change of the characteristics between adjacent frames;
- Windowing each frame of the signal increases the continuity of its left and right ends.
- The window function is a Hamming window, given by
- step 10: performing a fast Fourier transform (FFT) on each frame of the signal processed in step 9, moving the signal from the time domain to the frequency domain to obtain its spectrum, and then taking the squared modulus as the discrete power spectrum S(k);
- In equation (XX), d[T(i), R(w(j))] is the distance between the feature vector T(i) under test and the reference template vector R(w(j)); T(i) is the speech feature vector of the i-th frame of T; R(w(j)) is the speech feature vector of frame w(j) of R; D is the minimum distance between the vectors under test and the reference sample vectors;
- The reference sample template with the smallest matching distance gives the recognition result for the independent component.
- The same set of reference templates is used for every independent component.
- The four signals collected by the microphone array may correspond to a single sound source or to multiple sound sources; the independent sound source to be localized can be selected as required.
- The implementation system of the improved sound source localization method according to any one of Embodiments 1-5 is shown in FIG. 1; it comprises four microphones with voltage amplification and level-lifting circuit modules, a storage module, an algorithm processing and system control module, and a display module; the four microphones and the voltage amplification and level-lifting circuit modules are connected to the storage module, and the storage module, the algorithm processing and system control module, and the display module are connected in sequence;
- The microphones and the voltage amplification and level-lifting circuit modules acquire sound signals in real time; the storage module stores the acquired sound signals and the time signal; the algorithm processing and system control module separates the collected mixed sound signals by the progressive serial orthogonalization blind source separation algorithm, calculates the delay with the selected TDOA sound localization algorithm, and sets up the equations to solve for the sound source position; the display module displays the sound source position.
- The algorithm processing and system control module is the STM32 development platform; the display module is a liquid crystal display.
Abstract
Description
Claims (10)
- An improved sound source localization method based on a progressive serial orthogonalization blind source separation algorithm, characterized by comprising the following steps: (1) collect sound signals through a microphone array and store them; (2) separate the sound signals collected in step (1) with the progressive serial orthogonalization blind source separation algorithm to obtain the individual independent source signals; (3) for each independent source signal obtained in step (2), extract Mel-frequency cepstral coefficients as sound feature parameters, identify the sound signals with a pattern matching algorithm, and select the independent source signal of the sound to be localized; (4) according to the pattern matching result of step (3): if there is a single sound source, go to step (5); if there are multiple sound sources, compute the time delays with the TDOA algorithm and solve for the sound source position; (5) coarse localization first: take the envelope of the signal, sample it at low resolution, roughly estimate the time delay with the generalized cross-correlation method, and shift the signal in the time domain by the coarsely located number of points; then fine localization: sample at high resolution, compute the time delay with the generalized cross-correlation method to obtain the precise delay, and solve for the sound source position.
- The improved sound source localization method based on a progressive serial orthogonalization blind source separation algorithm according to claim 1, characterized in that obtaining the precise time delay in step (5) comprises the following steps: A. suppose four sound signals are obtained through step (3), namely x1(t), x2(t), x3(t), x4(t), where t is the index of the sampling point in the digital signal and the signal length is N; apply windowed filtering to the four sound signals to remove noise; B. extract the envelopes of the four signals, keep only the upper half of each envelope as the effective signal, and down-sample it at a rate of Fs/n to obtain x′1(t), x′2(t), x′3(t), x′4(t), where Fs is the sampling frequency used in blind source separation and n is an integer greater than 1; C. Fourier-transform x′1(t), x′2(t), x′3(t), x′4(t) to the frequency domain, obtaining X′1(k), X′2(k), X′3(k), X′4(k), where k is the frequency-domain index corresponding to t, and t and k are both integers; D. take x′1(t) as the reference signal and compute the cross-power spectra of X′1(k) with X′2(k), X′1(k) with X′3(k), and X′1(k) with X′4(k), namely G′12(k), G′13(k), G′14(k); apply the PHAT weighting to G′12(k), G′13(k), G′14(k), as shown in formulas (I), (II) and (III); E. inverse-transform the cross-power spectra G′12(k), G′13(k), G′14(k) back to the time domain to obtain the corresponding generalized cross-correlation functions R′12(t), R′13(t), R′14(t); the delays at which R′12(t), R′13(t), R′14(t) respectively reach their maxima are the delay estimates t′12, t′13, t′14 of the three sound signals x′2(t), x′3(t), x′4(t) relative to the reference signal x′1(t); let n′1s be the value of t at which R′1s(t) is maximal, s = 2, 3, 4; the number of points of the sampled sound signal is N′ = fix(N/n) and the sampling frequency is Fs/n; if n′1s > N′/2, update n′1s to n′1s − N′ − 1; if n′1s ≤ N′/2, leave n′1s unchanged; this yields n′12, n′13, n′14; F. if n′1s ≥ 0, shift x1(t) left in the time domain by n′1s·n points; if n′1s < 0, shift xs(t) right in the time domain by n′1s·n points; take the first N1 points of x1(t) and xs(t) as z1(t) and zs(t), where N1 is an integer greater than 2n and less than N; N1 is the signal length and Fs the sampling frequency; following steps C–E, use the generalized cross-correlation to obtain the precise delay point count n″12: Fourier-transform the signals z1(t) and z2(t) to the frequency domain, compute the PHAT-weighted cross-power spectrum, inverse-Fourier-transform it back to the time domain to obtain the cross-correlation function, and take the time corresponding to the point of maximum cross-correlation as the delay estimate of the two channels; n″13 and n″14 are computed in the same way as n″12;
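As a concrete illustration of steps C–E, the sketch below (assuming NumPy; function and variable names are mine, not the patent's) computes a PHAT-weighted generalized cross-correlation delay estimate with the wrap-around correction of step E. In 0-based indexing the claim's 1-based update n′1s − N′ − 1 becomes simply n − N′.

```python
import numpy as np

def gcc_phat_delay(x_ref, x_s, fs):
    """PHAT-weighted generalized cross-correlation delay estimate.

    Returns the lag of x_s relative to x_ref, in samples and in
    seconds; a positive lag means x_s arrives later than x_ref.
    """
    N = len(x_ref)
    X_ref = np.fft.fft(x_ref)
    X_s = np.fft.fft(x_s)
    G = np.conj(X_ref) * X_s        # cross-power spectrum
    G = G / (np.abs(G) + 1e-12)     # PHAT weighting: keep only the phase
    R = np.real(np.fft.ifft(G))     # generalized cross-correlation
    n = int(np.argmax(R))           # index of the correlation peak
    if n > N // 2:                  # wrap-around: map to a signed lag
        n -= N                      # (the claim's n' - N' - 1 in 1-based terms)
    return n, n / fs

# quick check with a known 5-sample circular shift
fs = 1000
rng = np.random.default_rng(0)
sig = rng.standard_normal(1024)
lag, tau = gcc_phat_delay(sig, np.roll(sig, 5), fs)
```

The same routine serves both the coarse stage (applied to the decimated envelopes) and the fine stage (applied to the full-rate signals after the time-domain shift).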
- The improved sound source localization method based on a progressive serial orthogonalization blind source separation algorithm according to claim 1, characterized in that in step (4), if there are multiple sound sources, the time delays are computed with the TDOA algorithm through the following steps: a. suppose the independent component to be localized obtained in step (2) is yi(t), where i is an integer with 1 ≤ i ≤ 4 and t is the index of the sampling point in the digital signal; apply windowed filtering to the five signals yi(t), x1(t), x2(t), x3(t), x4(t), then Fourier-transform them to the frequency domain to obtain the frequency-domain signals Yi(k), X1(k), X2(k), X3(k), X4(k), where k is the frequency-domain index corresponding to t; b. take the independent component yi(t) as the reference signal and compute the cross-power spectra of Yi(k) with X1(k), Yi(k) with X2(k), Yi(k) with X3(k), and Yi(k) with X4(k), namely Gi1(k), Gi2(k), Gi3(k), Gi4(k); apply the PHAT weighting to Gi1(k), Gi2(k), Gi3(k), Gi4(k), as shown in formulas (IV), (V), (VI) and (VII); c. inverse-transform the cross-power spectra Gi1(k), Gi2(k), Gi3(k), Gi4(k) back to the time domain to obtain the corresponding generalized cross-correlation functions Ri1(n), Ri2(n), Ri3(n), Ri4(n); the delays at which Ri1(n), Ri2(n), Ri3(n), Ri4(n) respectively reach their maxima are the delay estimates ti1, ti2, ti3, ti4 of the four sound signals x1(t), x2(t), x3(t), x4(t) relative to the reference signal yi(t); let ni1 be the value of n at which Ri1(n) is maximal; the number of points of the sound signal is N and the sampling frequency is Fs; if ni1 > N/2, update ni1 to ni1 − N − 1; if ni1 ≤ N/2, leave ni1 unchanged; d. take ti1 as the reference delay; then t12 = ti1 − ti2 is the delay of x1(t) relative to x2(t), t13 = ti1 − ti3 is the delay of x1(t) relative to x3(t), and t14 = ti1 − ti4 is the delay of x1(t) relative to x4(t), which gives the delays t12, t13, t14 of x1(t) relative to x2(t), x3(t), x4(t).
- The improved sound source localization method based on a progressive serial orthogonalization blind source separation algorithm according to claim 1, characterized in that the microphone array is obtained by placing microphones at the four positions (0,0,0), (a,0,0), (0,a,0), (0,0,a) in a three-dimensional Cartesian coordinate system, where a is a fixed parameter denoting the distance from each of the three positions (a,0,0), (0,a,0), (0,0,a) to the microphone at the origin (0,0,0).
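For illustration, the source position can be recovered from the three delay estimates and this tetrahedral array geometry by iteratively solving the range-difference equations. This is a hedged sketch, not the patent's own solver: it assumes NumPy, a Gauss-Newton iteration, and a nominal sound speed of 343 m/s.

```python
import numpy as np

C = 343.0  # assumed speed of sound in air, m/s

def locate_source(tdoas, a, guess=(1.0, 1.0, 1.0), iters=100):
    """Gauss-Newton solution of the TDOA equations for the array
    (0,0,0), (a,0,0), (0,a,0), (0,0,a).  tdoas[i-1] is the arrival
    time at microphone i (i = 1..3 on the axes) minus the arrival
    time at the origin microphone."""
    mics = np.array([[0, 0, 0], [a, 0, 0], [0, a, 0], [0, 0, a]], float)
    r = np.array(guess, float)
    for _ in range(iters):
        d0 = np.linalg.norm(r - mics[0])
        f = np.empty(3)
        J = np.empty((3, 3))
        for i in range(3):
            di = np.linalg.norm(r - mics[i + 1])
            f[i] = (di - d0) - C * tdoas[i]   # range-difference residual
            J[i] = (r - mics[i + 1]) / di - (r - mics[0]) / d0
        step = np.linalg.lstsq(J, f, rcond=None)[0]
        r = r - step
        if np.linalg.norm(step) < 1e-12:
            break
    return r

# synthetic check: exact delays computed from a known source position
a = 1.0
mics = np.array([[0, 0, 0], [a, 0, 0], [0, a, 0], [0, 0, a]], float)
src = np.array([2.0, 1.0, 0.5])
dists = np.linalg.norm(src - mics, axis=1)
tdoas = (dists[1:] - dists[0]) / C
est = locate_source(tdoas, a)
```

With noise-free delays the iteration converges to the true position; with estimated delays the residual norm indicates the localization quality.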
- The improved sound source localization method based on a progressive serial orthogonalization blind source separation algorithm according to claim 1, characterized in that in step (1), the sound signals collected through the microphone array form the mixed sound signal x(t), x(t) = [x1(t), x2(t), x3(t), x4(t)], where x1(t), x2(t), x3(t), x4(t) are given by formulas (IX), (X), (XI), (XII): x1(t) = a11·s1 + a12·s2 + a13·s3 + a14·s4 (IX); x2(t) = a21·s1 + a22·s2 + a23·s3 + a24·s4 (X); x3(t) = a31·s1 + a32·s2 + a33·s3 + a34·s4 (XI); x4(t) = a41·s1 + a42·s2 + a43·s3 + a44·s4 (XII); in formulas (IX)–(XII), s1, s2, s3, s4 are the sound signals emitted by the four independent sound sources, and aij (i = 1, 2, 3, 4; j = 1, 2, 3, 4) are real coefficients.
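The mixing model of formulas (IX)–(XII) can be simulated directly. The sketch below (assuming NumPy, with synthetic stand-in waveforms rather than recorded sound) builds x(t) = A·s(t) with random real coefficients aij:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 8000
t = np.arange(fs) / fs  # one second of samples

# four independent stand-in sources s1..s4 (a real system would
# record actual sound sources through the microphone array)
s = np.vstack([
    np.sin(2 * np.pi * 440 * t),                                  # pure tone
    np.sign(np.sin(2 * np.pi * 233 * t)),                         # square wave
    rng.uniform(-1.0, 1.0, t.size),                               # noise-like source
    np.sin(2 * np.pi * 97 * t + 3 * np.sin(2 * np.pi * 5 * t)),   # modulated tone
])

A = rng.uniform(0.2, 1.0, (4, 4))  # real mixing coefficients a_ij
x = A @ s                          # x_i(t) = sum_j a_ij * s_j(t), formulas (IX)-(XII)
```

Each row of `x` plays the role of one microphone channel x1(t)…x4(t) in the claims that follow.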
- The improved sound source localization method based on a progressive serial orthogonalization blind source separation algorithm according to claim 6, characterized in that in step (2), separating the sound signals collected in step (1) with the progressive serial orthogonalization blind source separation algorithm to obtain the individual independent source signals comprises the following steps: the whitening uses principal component analysis to decorrelate and rescale the signals, with the linear whitening transform V as shown in formula (XV); ③ compute the number of independent components of the observed signal z(t), denoted m, with m ≤ 4; choose initialization vectors wp of unit norm, p = 1, 2, …, m, and set p = 1; ④ perform on wp the iteration shown in formula (XVI), in which the function g is g1(y), g2(y) or g3(y): g1(y) = tanh(a1·y), g2(y) = y·exp(−y²/2), g3(y) = y³; ⑤ orthogonalize and normalize the iterated wp, with the orthogonalization as shown in formula (XVII); normalize wp by dividing it by its norm, as in formula (XVIII): wp = wp/norm(wp) (XVIII); ⑥ check whether the normalized wp has converged; if it has not converged, return to step ④; ⑦ update p to p + 1; if p ≤ m, return to step ④; otherwise go to step ⑧; ⑧ through the loop of steps ③–⑦, obtain the de-mixing matrix W = {w1, w2, …, wm}T, m ≤ 4, and obtain the source signals y(t) from formula (XIX): y(t) = W·x(t) (XIX); in formula (XIX), y(t) = [y1(t), y2(t), …, yi(t), …, ym(t)], i = 1, 2, …, m, are the m independent components obtained after blind source separation of the sound signals collected by the microphone array, i.e., the independent source signals.
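A minimal sketch of the deflationary procedure in steps ③–⑧, assuming NumPy, the g1 nonlinearity with a1 = 1, and PCA whitening. The exact formulas (XV)–(XVII) are not reproduced in the source text, so the standard FastICA forms are used here:

```python
import numpy as np

def fastica_deflation(x, m, max_iter=200, tol=1e-8, seed=0):
    """Deflationary FastICA: PCA whitening, fixed-point iteration
    with g(y) = tanh(y), Gram-Schmidt orthogonalization against the
    already-found vectors, and normalization to unit norm."""
    rng = np.random.default_rng(seed)
    x = x - x.mean(axis=1, keepdims=True)   # center the observations
    eigval, E = np.linalg.eigh(np.cov(x))
    V = E @ np.diag(eigval ** -0.5) @ E.T   # whitening transform (cf. (XV))
    z = V @ x                               # whitened observations
    n = z.shape[0]
    W = np.zeros((m, n))
    for p in range(m):
        w = rng.standard_normal(n)
        w /= np.linalg.norm(w)              # unit-norm initialization
        for _ in range(max_iter):
            w_old = w
            y = w @ z
            g, dg = np.tanh(y), 1.0 - np.tanh(y) ** 2
            w = (z * g).mean(axis=1) - dg.mean() * w   # fixed-point step (cf. (XVI))
            w = w - W[:p].T @ (W[:p] @ w)              # orthogonalize (cf. (XVII))
            w /= np.linalg.norm(w)                     # normalize (cf. (XVIII))
            if 1.0 - abs(w @ w_old) < tol:             # convergence check
                break
        W[p] = w
    return W, W @ z   # de-mixing matrix (for whitened data) and components

# two synthetic sources, mixed and then separated
t = np.arange(4000) / 4000.0
S = np.vstack([np.sin(2 * np.pi * 13 * t),
               np.sign(np.sin(2 * np.pi * 7 * t))])
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S
W, Y = fastica_deflation(X, 2)
```

The recovered components match the sources only up to permutation, sign and scale, which is why the pattern-matching stage of claim 8 is needed to identify which component to localize.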
- The improved sound source localization method based on a progressive serial orthogonalization blind source separation algorithm according to claim 7, characterized in that in step (3), for each independent source signal obtained, Mel-frequency cepstral coefficients are extracted as sound feature parameters, the sound signals are identified with a pattern matching algorithm, and the independent source signal of the sound to be localized is selected; comprising the following steps: ⑨ process the source signals y(t) separated in step ⑧ as follows: pre-emphasize y(t) by passing it through a high-pass filter with transfer function H(z) = 1 − μz⁻¹, 0.9 ≤ μ ≤ 1.0; divide the pre-emphasized signal into frames of 10 ms–30 ms with a frame shift of 1/2–1/3 of the frame length; ⑩ apply a fast Fourier transform to each frame to move the signal from the time domain to the frequency domain, obtain its spectrum, and take the squared magnitude as the discrete power spectrum S(k); pass the spectral parameters of each frame through a Mel-scale filter bank of V triangular band-pass filters, 20 ≤ V ≤ 30, to obtain V parameters Pv, v = 0, 1, …, V−1; take the logarithm of each band output to obtain Lv, v = 0, 1, …, V−1; apply a discrete cosine transform to the V parameters to obtain Dv, v = 0, 1, …, V−1; discard D0 and take D1, D2, …, Dk as the MFCC parameters; suppose the sound signal is divided into p frame vectors, i.e. {T(1), T(2), …, T(n), …, T(p)}, where T(n) is the speech feature vector of frame n, 1 ≤ n ≤ p, and a reference sample has q frame vectors, i.e. {R(1), R(2), …, R(m), …, R(q)}, where R(m) is the speech feature vector of frame m, 1 ≤ m ≤ q; the dynamic time warping (DTW) algorithm then uses a time-warping function j = w(i) to map the time axes of the test vectors and the template vectors, with the warping function w satisfying formula (XX), in which d[T(i), R(w(j))] is the distance between test vector T(i) and reference template vector R(j); T(i) is the speech feature vector of frame i of T; R(w(j)) is the speech feature vector of frame j of R; D is the minimum distance between the test vectors and the reference sample vectors; after DTW matches the test sound template against all reference sample templates, the reference sample template with the smallest matching distance is the recognition result for the independent component; if the four test sounds all obtain their smallest matching distance with the same reference template, the four signals collected by the microphone array come from a single sound source; otherwise, the four signals collected by the microphone array contain multiple sound sources.
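The DTW matching described above can be sketched as follows (assuming NumPy and a Euclidean frame distance; the MFCC extraction itself is omitted and random vectors stand in for the feature frames):

```python
import numpy as np

def dtw_distance(T, R):
    """Minimum cumulative distance D between test frames T (p x k)
    and reference frames R (q x k) under dynamic time warping,
    using Euclidean frame distance and the classic three-way
    recursion (insertion, deletion, match)."""
    p, q = len(T), len(R)
    d = np.linalg.norm(T[:, None, :] - R[None, :, :], axis=2)  # frame distances
    D = np.full((p, q), np.inf)
    D[0, 0] = d[0, 0]
    for i in range(p):
        for j in range(q):
            if i == 0 and j == 0:
                continue
            prev = min(D[i - 1, j] if i > 0 else np.inf,
                       D[i, j - 1] if j > 0 else np.inf,
                       D[i - 1, j - 1] if i > 0 and j > 0 else np.inf)
            D[i, j] = d[i, j] + prev
    return D[-1, -1]

# a time-stretched copy of a feature sequence should match far
# better than an unrelated sequence
rng = np.random.default_rng(2)
feat = rng.standard_normal((20, 12))     # 20 frames of 12 MFCC-like features
stretched = np.repeat(feat, 2, axis=0)   # each frame doubled in time
other = rng.standard_normal((20, 12))
```

Classifying an independent component then amounts to computing `dtw_distance` against every reference template and picking the minimum, as the claim describes.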
- An implementation system of the improved sound source localization method based on a progressive serial orthogonalization blind source separation algorithm according to claim 1 or any one of claims 4-8, characterized by comprising four microphone and voltage amplification and level-raising circuit modules, a storage module, an algorithm processing and system control module, and a display module; the four microphone and voltage amplification and level-raising circuit modules are all connected to the storage module, and the storage module, the algorithm processing and system control module, and the display module are connected in sequence; the four microphone and voltage amplification and level-raising circuit modules acquire sound signals in real time; the storage module stores the acquired sound signals and time signals; the algorithm processing and system control module separates the collected mixed sound signals with the progressive serial orthogonalization blind source separation algorithm, computes the time delays with the selected TDOA sound localization algorithm, and sets up a system of equations to solve for the sound source position; the display module displays the sound source position.
- The implementation system according to claim 9, wherein the algorithm processing and system control module is an STM32 development platform, and the display module is a liquid crystal display.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710911515.0A CN107644650B (zh) | 2017-09-29 | 2017-09-29 | 一种基于渐进串行正交化盲源分离算法的改进声源定位方法及其实现系统 |
CN201710911515.0 | 2017-09-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019061439A1 (zh) | 2019-04-04 |
Family
ID=61112147
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/104879 WO2019061439A1 (zh) | 2017-09-29 | 2017-09-30 | 一种基于渐进串行正交化盲源分离算法的改进声源定位方法及其实现系统 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107644650B (zh) |
WO (1) | WO2019061439A1 (zh) |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108648758B (zh) * | 2018-03-12 | 2020-09-01 | 北京云知声信息技术有限公司 | 医疗场景中分离无效语音的方法及系统 |
CN108922557A (zh) * | 2018-06-14 | 2018-11-30 | 北京联合大学 | 一种聊天机器人的多人语音分离方法及系统 |
CN108877831B (zh) * | 2018-08-28 | 2020-05-15 | 山东大学 | 基于多标准融合频点筛选的盲源分离快速方法及系统 |
CN110888112B (zh) * | 2018-09-11 | 2021-10-22 | 中国科学院声学研究所 | 一种基于阵列信号的多目标定位识别方法 |
CN109671439B (zh) * | 2018-12-19 | 2024-01-19 | 成都大学 | 一种智能化果林鸟害防治设备及其鸟类定位方法 |
CN109741759B (zh) * | 2018-12-21 | 2020-07-31 | 南京理工大学 | 一种面向特定鸟类物种的声学自动检测方法 |
CN110007276B (zh) * | 2019-04-18 | 2021-01-12 | 太原理工大学 | 一种声源定位方法及系统 |
CN110361695B (zh) * | 2019-06-06 | 2021-06-15 | 杭州未名信科科技有限公司 | 分置式声源定位系统和方法 |
CN111856401A (zh) * | 2020-07-02 | 2020-10-30 | 南京大学 | 一种基于互谱相位拟合的时延估计方法 |
CN111787609A (zh) * | 2020-07-09 | 2020-10-16 | 北京中超伟业信息安全技术股份有限公司 | 基于人体声纹特征和麦克风基站的人员定位系统及方法 |
CN114088332B (zh) * | 2021-11-24 | 2023-08-22 | 成都流体动力创新中心 | 一种用于旋转叶片声音信号提取的风洞背景噪声修正方法 |
CN114220454B (zh) * | 2022-01-25 | 2022-12-09 | 北京荣耀终端有限公司 | 一种音频降噪方法、介质和电子设备 |
CN115902776B (zh) * | 2022-12-09 | 2023-06-27 | 中南大学 | 一种基于被动式声音信号的声源定位方法 |
CN116866124A (zh) * | 2023-07-13 | 2023-10-10 | 中国人民解放军战略支援部队航天工程大学 | 一种基于基带信号时间结构的盲分离方法 |
CN118016102A (zh) * | 2024-04-08 | 2024-05-10 | 湖北经济学院 | 一种基于非调制声音信号的定位方法及装置 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103021405A (zh) * | 2012-12-05 | 2013-04-03 | 渤海大学 | 基于music和调制谱滤波的语音信号动态特征提取方法 |
CN103258533A (zh) * | 2013-05-27 | 2013-08-21 | 重庆邮电大学 | 远距离语音识别中的模型域补偿新方法 |
CN104766093A (zh) * | 2015-04-01 | 2015-07-08 | 中国科学院上海微系统与信息技术研究所 | 一种基于麦克风阵列的声目标分类方法 |
US20160358606A1 (en) * | 2015-06-06 | 2016-12-08 | Apple Inc. | Multi-Microphone Speech Recognition Systems and Related Techniques |
CN106646376A (zh) * | 2016-12-05 | 2017-05-10 | 哈尔滨理工大学 | 基于加权修正参数的p范数噪声源定位识别方法 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2002061732A1 (en) * | 2001-01-30 | 2002-08-08 | Thomson Licensing S.A. | Geometric source separation signal processing technique |
US6865490B2 (en) * | 2002-05-06 | 2005-03-08 | The Johns Hopkins University | Method for gradient flow source localization and signal separation |
US8073690B2 (en) * | 2004-12-03 | 2011-12-06 | Honda Motor Co., Ltd. | Speech recognition apparatus and method recognizing a speech from sound signals collected from outside |
RU2565338C2 (ru) * | 2010-02-23 | 2015-10-20 | Конинклейке Филипс Электроникс Н.В. | Определение местоположения аудиоисточника |
CN101957443B (zh) * | 2010-06-22 | 2012-07-11 | 嘉兴学院 | 声源定位方法 |
CN104053107B (zh) * | 2014-06-06 | 2018-06-05 | 重庆大学 | 一种用于噪声环境下声源分离和定位方法 |
CN105872366B (zh) * | 2016-03-30 | 2018-08-24 | 南昌大学 | 一种基于fastica算法的盲源分离技术控制聚焦系统 |
-
2017
- 2017-09-29 CN CN201710911515.0A patent/CN107644650B/zh active Active
- 2017-09-30 WO PCT/CN2017/104879 patent/WO2019061439A1/zh active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103021405A (zh) * | 2012-12-05 | 2013-04-03 | 渤海大学 | 基于music和调制谱滤波的语音信号动态特征提取方法 |
CN103258533A (zh) * | 2013-05-27 | 2013-08-21 | 重庆邮电大学 | 远距离语音识别中的模型域补偿新方法 |
CN104766093A (zh) * | 2015-04-01 | 2015-07-08 | 中国科学院上海微系统与信息技术研究所 | 一种基于麦克风阵列的声目标分类方法 |
US20160358606A1 (en) * | 2015-06-06 | 2016-12-08 | Apple Inc. | Multi-Microphone Speech Recognition Systems and Related Techniques |
CN106646376A (zh) * | 2016-12-05 | 2017-05-10 | 哈尔滨理工大学 | 基于加权修正参数的p范数噪声源定位识别方法 |
Also Published As
Publication number | Publication date |
---|---|
CN107644650A (zh) | 2018-01-30 |
CN107644650B (zh) | 2020-06-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019061439A1 (zh) | 一种基于渐进串行正交化盲源分离算法的改进声源定位方法及其实现系统 | |
CN102103200B (zh) | 一种分布式非同步声传感器的声源空间定位方法 | |
CN109188362B (zh) | 一种麦克风阵列声源定位信号处理方法 | |
CN104360310B (zh) | 一种多目标近场源定位方法和装置 | |
CN109448389B (zh) | 一种汽车鸣笛智能检测方法 | |
WO2020024816A1 (zh) | 音频信号处理方法、装置、设备和存储介质 | |
CN110534126B (zh) | 一种基于固定波束形成的声源定位和语音增强方法及系统 | |
CN111798869B (zh) | 一种基于双麦克风阵列的声源定位方法 | |
CN103854660A (zh) | 一种基于独立成分分析的四麦克语音增强方法 | |
CN113702909A (zh) | 一种基于声音信号到达时间差的声源定位解析解计算方法及装置 | |
CN107202559B (zh) | 基于室内声学信道扰动分析的物体识别方法 | |
CN107167770A (zh) | 一种混响条件下的麦克风阵列声源定位装置 | |
CN109597021B (zh) | 一种波达方向估计方法及装置 | |
CN108089146B (zh) | 一种对预估角误差鲁棒的高分辨宽带波达方向估计方法 | |
CN103837858B (zh) | 一种用于平面阵列的远场波达角估计方法及系统 | |
Huang et al. | One-dimensional MUSIC-type algorithm for spherical microphone arrays | |
EP1682923A1 (fr) | Procede de localisation d un ou de plusieurs emetteurs | |
Hu et al. | Decoupled direction-of-arrival estimations using relative harmonic coefficients | |
CN116559778B (zh) | 一种基于深度学习的车辆鸣笛定位方法及系统 | |
CN116910690A (zh) | 一种基于数据融合的目标分类系统 | |
Hu et al. | Evaluation and comparison of three source direction-of-arrival estimators using relative harmonic coefficients | |
CN111968671B (zh) | 基于多维特征空间的低空声目标综合识别方法及装置 | |
CN112666520A (zh) | 一种可调响应时频谱声源定位方法及系统 | |
CN110426711B (zh) | 一种基于极性零点检测的时延估计方法及系统 | |
Yang et al. | A Review of Sound Source Localization Research in Three-Dimensional Space |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17927052 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17927052 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19.10.2020) |
|