CN111610491B - Sound source positioning system and method - Google Patents
- Publication number
- CN111610491B (application CN202010469457.2A)
- Authority
- CN
- China
- Prior art keywords
- microphone
- sound pressure
- microphones
- plane
- sound source
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S5/00—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
- G01S5/18—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
- G01S5/20—Position of source determined by a plurality of spaced direction-finders
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
- Circuit For Audible Band Transducer (AREA)
Abstract
The application relates to a sound source positioning system and method. In the sound source positioning method, a first time difference between the sound pressure signal acquired by each first microphone and the sound pressure signals acquired by the other first microphones is obtained, and the angle information of the projection point of the sound source in a first plane is derived from the first time difference. A second time difference between the sound pressure signal acquired by the second microphone and the sound pressure signal acquired by each first microphone is then obtained, and the angle information of the sound source in a second plane is derived from the second time difference, where the second plane is perpendicular to the first plane and passes through the projection point of the sound source in the first plane. Because the second microphone participates in the positioning calculation, fewer first microphones are needed, which reduces the structural complexity of the system.
Description
Technical Field
The present application relates to the field of noise testing technologies, and in particular, to a sound source localization system and method.
Background
Current systems for monitoring ambient noise use a measuring microphone to record the sound signal. Because a measuring microphone records only the strength of the sound and cannot distinguish its direction, a typical environmental noise monitoring device records only the sound pressure level of the noise (i.e., how loud it is) and not the direction the sound comes from, which hinders further study and control of the noise.
In conventional technical schemes, noise positioning methods require a large number of microphones, which makes the noise positioning device structurally complex.
Disclosure of Invention
Based on this, the present application provides a sound source localization system and method that reduce the number of microphones and thus the structural complexity.
A sound source localization system, comprising:
a plurality of first microphones arrayed in a first plane, wherein the distance between any two adjacent first microphones is smaller than a preset value, and each first microphone is used for acquiring a sound pressure signal;
a second microphone arranged on a central axis perpendicular to the first plane, wherein the distance from the second microphone to the first plane is larger than the preset value, and the second microphone is used for acquiring sound pressure signals synchronously with the plurality of first microphones; and
a processor electrically connected to the first microphones and the second microphone, the processor being used for acquiring a first time difference between the sound pressure signal acquired by each first microphone and the sound pressure signals acquired by the other first microphones and acquiring angle information of the projection point of a sound source in the first plane according to the first time difference, and further used for acquiring a second time difference between the sound pressure signal acquired by the second microphone and the sound pressure signal acquired by each first microphone and acquiring angle information of the sound source in a second plane according to the second time difference;
wherein the second plane is perpendicular to the first plane, and the second plane passes through the projection point of the sound source in the first plane.
In one embodiment, the first microphones are arranged in a circular ring at equal intervals in the first plane.
In one embodiment, the distance between any two adjacent first microphones is smaller than a preset value, and the distance between the second microphone and the first plane is larger than the preset value.
In one embodiment, the number of the first microphones is greater than or equal to 4.
A sound source localization method using the sound source localization system according to any one of the above embodiments, comprising:
synchronously acquiring sound pressure signals with the plurality of first microphones and the second microphone;
the processor acquires a first time difference between the sound pressure signal acquired by each first microphone and the sound pressure signals acquired by other first microphones, and acquires angle information of a projection point of a sound source in the first plane according to the first time difference;
the processor acquires a second time difference between the sound pressure signal acquired by the second microphone and the sound pressure signal acquired by each first microphone, and acquires angle information of the sound source in a second plane according to the second time difference, wherein the second plane is perpendicular to the first plane and passes through the projection point of the sound source in the first plane.
In one embodiment, the phase difference of the sound pressure signals is used to represent the first time difference, and the phase difference is obtained by spectral analysis or another algorithm.
In one embodiment, the sound source azimuth is obtained from the phase differences of the sound pressure signals by a delay-and-sum algorithm, a multiple signal classification (MUSIC) algorithm, a steered response power (SRP) algorithm, or another beamforming algorithm.
In one embodiment, the step of obtaining the second time difference comprises:
respectively acquiring a cross-correlation curve of the sound pressure signal acquired by the second microphone and the sound pressure signal acquired by each first microphone;
and determining the maximum correlation point of the cross-correlation curve, the time position of which is used as the second time difference.
In one embodiment, the step of obtaining a cross-correlation curve between the sound pressure signal collected by the second microphone and the sound pressure signal collected by each of the first microphones respectively comprises:
judging, from the shapes of the sound pressure signal collected by the second microphone and the sound pressure signal collected by each first microphone, whether the two signals are periodically correlated;
when the sound pressure signal collected by the second microphone is not periodically correlated with the sound pressure signal collected by each first microphone, executing the step of determining the maximum correlation point of the cross-correlation curve.
In one embodiment, when the sound pressure signal acquired by the second microphone is periodically correlated with the sound pressure signal acquired by each first microphone, a non-stationary segment of the two signals is sought, and the step of judging whether the signals are periodically correlated is performed again on that segment.
The sound source positioning system comprises a plurality of first microphones, a second microphone, and a processor. The first microphones are arrayed in a first plane, and each is used to acquire a sound pressure signal. The second microphone is arranged on a central axis perpendicular to the first plane and acquires sound pressure signals synchronously with the first microphones. The processor is electrically connected to the first microphones and the second microphone. The processor acquires the time difference between the sound pressure signal acquired by each first microphone and the sound pressure signals acquired by the other first microphones, and from it obtains the angle information of the projection point of the sound source in the first plane. The processor further acquires the time difference between the sound pressure signal acquired by the second microphone and the sound pressure signal acquired by each first microphone, and from it obtains the angle information of the sound source in a second plane. The second plane is perpendicular to the first plane and passes through the projection point of the sound source in the first plane. Because the second microphone participates in the positioning calculation, fewer first microphones are needed, which reduces the structural complexity.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the conventional technologies of the present application, the drawings used in the description of the embodiments or the conventional technologies are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a structural layout diagram of a sound source localization system according to an embodiment of the present application;
FIG. 2 is a schematic diagram of position parameters of microphones and sound sources provided in accordance with an embodiment of the present application;
FIG. 3 is a flow chart of a sound source localization method according to an embodiment of the present application;
fig. 4 is a flowchart of a sound source localization method according to an embodiment of the present application.
Description of the main element reference numerals
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, embodiments are described in detail below with reference to the accompanying figures. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The application can, however, be embodied in many forms different from those described herein, and those skilled in the art can make similar modifications without departing from its spirit; the application is therefore not limited to the embodiments disclosed below.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first acquisition module may be referred to as a second acquisition module, and similarly, a second acquisition module may be referred to as a first acquisition module, without departing from the scope of the present application. The first acquisition module and the second acquisition module are both acquisition modules, but are not the same acquisition module.
It will be understood that when an element is referred to as being "disposed on" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Referring to fig. 1, the present application provides a sound source localization system. The sound source localization system comprises a plurality of first microphones 10, a second microphone 20, and a processor 30.
A plurality of the first microphones 10 are arrayed in the first plane 101. The first microphones 10 are used to acquire sound pressure signals. The second microphone 20 is disposed on a central axis perpendicular to the first plane 101 and acquires sound pressure signals synchronously with the plurality of first microphones 10. The processor 30 is electrically connected to the plurality of first microphones 10 and to the second microphone 20. The processor 30 is configured to obtain a first time difference between the sound pressure signal collected by each first microphone 10 and the sound pressure signals collected by the other first microphones 10, and to obtain angle information of the projection point of a sound source in the first plane 101 according to the first time difference. The processor 30 is further configured to obtain a second time difference between the sound pressure signal collected by the second microphone 20 and the sound pressure signal collected by each first microphone 10, and to obtain angle information of the sound source in the second plane 102 according to the second time difference. The second plane 102 is perpendicular to the first plane 101 and passes through the projection point of the sound source in the first plane 101.
It is understood that the structure of the processor 30 is not particularly limited, as long as it can obtain the first time difference between the sound pressure signal collected by each first microphone 10 and the sound pressure signals collected by the other first microphones 10 (and from it the angle information of the projection point of the sound source in the first plane 101), and the second time difference between the sound pressure signal collected by the second microphone 20 and the sound pressure signal collected by each first microphone 10 (and from it the angle information of the sound source in the second plane 102). Alternatively, the processor 30 may be a microprocessor or a single-chip microcomputer.
It will be appreciated that the first microphone 10 and the second microphone 20 are synchronized for sound signal acquisition, and therefore the time difference between the signals can be used for sound source orientation determination. In order to achieve synchronization of signal acquisition by the first microphone 10 and the second microphone 20, the same crystal clock may be used to drive analog-to-digital conversion of the first microphone 10 and the second microphone 20 in terms of hardware design.
It is to be understood that the structure, installation position and model of the first and second microphones 10 and 20 are not particularly limited as long as sound pressure signals can be collected. The sound pressure signal is an electrical signal converted from the monitored sound signal.
In an alternative embodiment, measuring the sound pressure level of the sound source requires a microphone with high measurement accuracy, so the second microphone 20 may be a condenser microphone used to monitor the sound pressure level of the sound source. To exclude the influence of other components of the sound source localization system on the sound field, the second microphone 20 may be disposed at the uppermost part of the whole system (see fig. 1). The sound pressure signal collected by the second microphone 20 is then used both to determine the sound source bearing and to determine the sound pressure level of the sound source, while the sound pressure signals acquired by the first microphones 10 are used only for determining the sound source bearing. Because the first microphones 10 mainly serve the azimuth calculation, they may be condenser microphones, but lower-cost microphones, such as microphones based on the MEMS (Micro-Electro-Mechanical Systems) principle, may also be used. Since the sound pressure signal collected by the second microphone 20 also participates in determining the sound source bearing, the first microphones 10 only need to be arranged in the first plane 101 and no first microphones 10 are needed in the vertical direction, which reduces the number of first microphones 10, effectively lowers the cost, simplifies the structure, and reduces the amount of calculation.
It is to be understood that the numbers of the first microphones 10 and of the second microphones 20 are not particularly limited; the greater their numbers, the higher the monitoring accuracy of the sound source azimuth. In one embodiment, the number of the first microphones 10 is greater than or equal to 4. As shown in fig. 1, the system includes one second microphone 20 serving as the measuring microphone and 8 first microphones 10.
It is understood that the angle information of the projection point of the sound source in the first plane 101 may be a deflection angle of the projection point of the sound source relative to a preset point in the first plane 101. The angle information of the sound source in the second plane 102 may be a deflection angle of the sound source with respect to a predetermined point in the first plane 101.
In one embodiment, the first microphones 10 are located below the second microphone 20 and lie in the same horizontal plane. The first microphones 10 may be arranged uniformly on a circle, although neither the circular shape nor the uniform spacing is required.
Alternatively, referring to fig. 2, a plurality of the first microphones 10 are arranged at equal intervals on a circular ring in the first plane 101. The preset point in the first plane 101 may then be the center of the ring, the angle information of the projection point of the sound source in the first plane 101 may be regarded as a horizontal angle, and the angle information of the sound source in the second plane 102 may be regarded as a vertical angle. In environmental noise monitoring, the bearing of the main noise is recorded, and that bearing can be described by a horizontal angle and a vertical angle. The horizontal angle, in the range of 0 to 360 degrees in the horizontal plane centered on the ring center, determines the direction the sound source comes from in the horizontal plane. The vertical angle, in the range of -180 degrees to 180 degrees from the ground to the sky about the same center, can be used to judge whether the sound source is on the ground or in the sky. The basic parameters are shown in fig. 2: the first microphones 10 are arranged in the horizontal OXY plane formed by the X-axis and the Y-axis, and the second microphone 20 is located on the Z-axis perpendicular to the OXY plane, at a distance D from the first plane 101. The sound source is located at an arbitrary point A in space and projects onto the OXY plane at point A'. The bearing of the sound source in the OXYZ coordinate system is represented by a horizontal angle θ and a vertical angle Φ.
In practical applications, the horizontal angle requires high precision, but the vertical angle does not, as long as noise from the sky can be distinguished from noise from the ground, so a single second microphone 20 suffices. Sky noise is aircraft noise, while ground noise generally comprises traffic, construction, and daily-life noise; the actual noise source can be further identified by combining the horizontal angle, the surroundings of the monitoring point, and other signal identification means.
In an optional embodiment, the distance between any two adjacent first microphones 10 is smaller than a preset value, and the distance between the second microphone 20 and the first plane 101 is greater than the preset value. The preset value is half the wavelength of the highest-frequency noise to be monitored. Keeping the first microphones 10 far from the second microphone 20 prevents them from disturbing the sound field near the second microphone 20 and thus preserves the measurement accuracy of the second microphone 20.
The horizontal angle θ of the sound source (the angle information of the projection point of the sound source in the first plane 101) is determined by all the first microphones 10 together. Because the sound emitted by the source reaches the individual microphones at different times, the angle θ of the sound source in the horizontal plane can be determined from the time differences of the sound signals received by the first microphones 10. The pitch of the first microphones 10 is less than half the wavelength of the noise to be measured, so the first time differences of the signals received by the first microphones 10 can be represented by phase differences. The angle θ can then be calculated by any beamforming algorithm for noise source localization, such as the delay-and-sum algorithm, the MUSIC (multiple signal classification) algorithm, or the SRP (steered response power) algorithm. With such algorithms, the number of first microphones 10 determines the accuracy of the horizontal angle θ.
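As a concrete illustration of the beamforming step described above, the sketch below steers a ring of first microphones over candidate horizontal angles in the frequency domain and returns the angle of maximum delay-and-sum output power. This is a minimal sketch, not the patent's implementation; the array radius, sample rate, and function name are assumptions chosen for the example.

```python
import numpy as np

def delay_and_sum_bearing(signals, mic_angles, radius, fs, c=343.0, n_grid=360):
    """Estimate the horizontal angle of a plane wave with a ring array.

    signals:    (M, N) array of synchronously sampled sound pressure signals
    mic_angles: angular positions (rad) of the M first microphones on the ring
    """
    M, N = signals.shape
    freqs = np.fft.rfftfreq(N, 1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    candidates = np.linspace(0.0, 2 * np.pi, n_grid, endpoint=False)
    power = np.empty(n_grid)
    for k, theta in enumerate(candidates):
        # Arrival time of each microphone relative to the ring centre for a
        # plane wave from direction theta (closer microphones hear it earlier).
        t_arr = -(radius / c) * np.cos(theta - mic_angles)
        # Undo each microphone's delay in the frequency domain, then sum.
        aligned = spectra * np.exp(2j * np.pi * freqs[None, :] * t_arr[:, None])
        power[k] = np.sum(np.abs(aligned.sum(axis=0)) ** 2)
    return candidates[np.argmax(power)]
```

With eight microphones on a 0.1 m ring, synthesizing plane-wave arrivals from 70 degrees and running the estimator recovers the bearing to within the 1-degree search grid.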
The first microphones 10 and the second microphone 20 are used together to determine the vertical angle Φ of the sound source (the angle information of the sound source in the second plane 102), because the arrival times of the source signal at the first microphones 10 and at the second microphone 20 also differ in the vertical direction. To ensure that the sound field around the second microphone 20 is not affected, the first microphones 10 must be kept at a sufficient distance D from it, and D may therefore exceed half the sound wavelength λ. In that case computing the time difference from a phase difference suffers phase wrapping (aliasing), so the phase difference cannot be used directly. The present application instead computes the time difference τ with a cross-correlation method: a cross-correlation curve is calculated between the signal collected by the second microphone 20 and the signal collected by each first microphone 10, and the time position of the maximum correlation point of the curve is taken as the time difference τ.
The cross-correlation calculation formula is as follows:

R_xy(τ) = ∫_0^T x(t) · y(t + τ) dt

The cross-correlation is a function of the time delay τ, where x(t) is the sound pressure signal acquired by the second microphone 20, y(t) is the sound pressure signal acquired by a first microphone 10, and T is the length of the signal segment involved in the calculation, which should be greater than the maximum possible time difference. In the cross-correlation curve, the time position τ of the maximum point is the time difference between the arrival of the noise at the first microphone 10 and its arrival at the second microphone 20.
The time difference between the first microphones 10 and the second microphone 20 is thus calculated with the cross-correlation algorithm, and the vertical angle is then calculated directly using the previously computed horizontal angle. The source direction of the main noise is determined from the horizontal angle and the vertical angle together.
For a completely stationary periodic sound signal, such as a steady sinusoid, the cross-correlation is itself periodic: multiple maximum correlation points occur in the cross-correlation curve, spaced by the period of the sinusoid, so a unique maximum correlation point cannot be determined. It is therefore necessary to monitor the noise continuously, find a non-stationary phase of the noise, such as its beginning or ending, perform the cross-correlation calculation on that non-stationary segment, and determine the time position of the maximum correlation point there.
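The ambiguity described above can be reproduced numerically. In the sketch below (an illustration with made-up frequencies and delay, not values from the patent), a stationary tone produces several equally tall correlation maxima one period apart, while a non-stationary swept tone produces a single maximum at the true delay.

```python
import numpy as np

fs, d = 8000, 30                       # sample rate and true delay in samples
t = np.arange(fs) / fs                 # one second of signal

def cross_correlation(x):
    """Correlate x with a copy of itself delayed by d samples."""
    y = np.concatenate([np.zeros(d), x[:-d]])
    corr = np.correlate(y, x, mode="full")
    return corr, np.arange(-(len(x) - 1), len(x))

# Stationary 500 Hz tone: near-equal maxima repeat every period
# (fs / 500 = 16 samples), so no unique maximum-correlation point exists.
corr, lags = cross_correlation(np.sin(2 * np.pi * 500 * t))
tone_peaks = lags[corr > 0.99 * corr.max()]

# Non-stationary swept tone: a single unambiguous maximum at the true
# delay, which is why the method searches for non-stationary segments.
corr, lags = cross_correlation(np.sin(2 * np.pi * (200 * t + 400 * t ** 2)))
sweep_peak = lags[np.argmax(corr)]
```

For the tone, every near-maximum lag differs from the true delay by a whole number of 16-sample periods; for the swept tone, the peak lag equals the true delay.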
Since the horizontal angle θ has already been calculated, the sound source can be taken to lie in the vertical plane through that horizontal angle. Assuming the noise arrives at the monitoring device as a parallel (plane) wave, the vertical angle Φ can then be calculated from the time difference τ, the distance D, and the array geometry.
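The patent text does not reproduce the formula at this point. Under the stated plane-wave assumption, one plausible reconstruction of the geometry is: with the second microphone 20 at height D on the Z-axis and a first microphone 10 at radius r and ring angle α, a plane wave from direction (θ, Φ) yields c·τ = D·sin Φ − r·cos Φ·cos(θ − α), which can be solved for Φ. The sketch below implements this reconstruction; the function name and every parameter choice are assumptions, not the patent's formula.

```python
import numpy as np

def vertical_angle(tau, theta, alpha, D, r, c=343.0):
    """Solve c*tau = D*sin(phi) - r*cos(phi)*cos(theta - alpha) for phi.

    tau:   second time difference (first microphone minus second microphone)
    theta: horizontal angle already found by beamforming
    alpha: ring angle of the first microphone used
    D, r:  height of the second microphone and radius of the ring
    """
    a, b = D, r * np.cos(theta - alpha)
    # Identity: a*sin(phi) - b*cos(phi) = hypot(a, b) * sin(phi - atan2(b, a))
    return np.arcsin(np.clip(c * tau / np.hypot(a, b), -1.0, 1.0)) + np.arctan2(b, a)
```

Computing τ forward from a known elevation and inverting it recovers the same angle, which is a quick consistency check on the reconstruction.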
in the coordinate system shown in fig. 2, the source direction of noise is expressed by a horizontal angle θ and a vertical angle Φ. Firstly, a beam forming algorithm is used for determining a horizontal angle of a sound source, and on the basis, a cross-correlation algorithm is used for determining a vertical angle, so that spatial sound source positioning calculation is realized, a sufficient distance between the first microphone 10 and the second microphone 20 can be allowed, the first microphone 10 is prevented from influencing a sound field near the second microphone 20, and the accuracy of a measurement result is ensured. Using the cross-correlation to calculate the time difference between the second microphone 20 and the first microphone 10 ensures that the first microphone 10 can be moved away from and without affecting the second microphone 20, ensuring the accuracy of the measurement of the second microphone 20.
The sound source localization system includes a plurality of first microphones 10, a second microphone 20, and a processor 30. The first microphones 10 are arranged in an array in a first plane 101, and each is adapted to pick up a sound pressure signal. The second microphone 20 is disposed on a central axis perpendicular to the first plane 101 and acquires sound pressure signals synchronously with the plurality of first microphones 10. The processor 30 is electrically connected to the plurality of first microphones 10 and to the second microphone 20. The processor 30 obtains a first time difference between the sound pressure signal collected by each first microphone 10 and the sound pressure signals collected by the other first microphones 10, and from it the angle information of the projection point of a sound source in the first plane 101. The processor 30 further obtains a second time difference between the sound pressure signal collected by the second microphone 20 and the sound pressure signal collected by each first microphone 10, and from it the angle information of the sound source in a second plane 102, where the second plane 102 is perpendicular to the first plane 101 and passes through the projection point of the sound source in the first plane 101. Because the second microphone 20 participates in the positioning calculation, the number and structural complexity of the first microphones 10 can be reduced.
Referring to fig. 3, the present application provides a sound source localization method. The sound source positioning method is realized by using the sound source positioning system in any one of the above embodiments. The sound source localization method includes:
and S10, synchronously acquiring sound pressure signals by using the first microphones 10 and the second microphones 20.
In step S10, the first microphone 10 and the second microphone 20 acquire sound signals in synchronization with each other, and thus the sound source direction can be determined using the time difference between the signals. In order to achieve synchronization of signal acquisition by the first microphone 10 and the second microphone 20, the same crystal clock may be used to drive analog-to-digital conversion of the first microphone 10 and the second microphone 20 in terms of hardware design.
And S20, the processor acquires a first time difference between the sound pressure signal acquired by each first microphone 10 and the sound pressure signals acquired by other first microphones 10, and acquires angle information of a projection point of a sound source in the first plane 101 according to the first time difference.
In step S20, the angle information of the projection point of the sound source in the first plane 101 may be a deflection angle of the projection point relative to a preset point in the first plane 101. In one embodiment, phase differences of the sound pressure signals are used to represent the first time differences between the sound pressure signal acquired by each first microphone 10 and the sound pressure signals acquired by the other first microphones 10; the phase differences are obtained by spectral analysis or another algorithm. In one embodiment, the sound source azimuth is obtained from the phase differences by a delay-and-sum algorithm, a multiple signal classification (MUSIC) algorithm, a steered response power (SRP) algorithm, or another beamforming algorithm.
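A minimal sketch of representing the first time difference by a phase difference is shown below: it reads the phase of the cross-spectrum at a single frequency bin. The 250 Hz test tone and the function name are illustrative assumptions, and the mapping from phase to time is unambiguous only while the microphone spacing keeps the phase difference within ±π (the half-wavelength condition stated earlier).

```python
import numpy as np

def phase_difference(x, y, fs, f0):
    """Phase of y relative to x at frequency f0, from the cross-spectrum."""
    win = np.hanning(len(x))
    X = np.fft.rfft(x * win)
    Y = np.fft.rfft(y * win)
    k = int(round(f0 * len(x) / fs))          # FFT bin nearest f0
    return np.angle(Y[k] * np.conj(X[k]))     # radians, in (-pi, pi]
```

The implied first time difference at f0 then follows as tau = -dphi / (2*pi*f0).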
And S30, acquiring a second time difference between the sound pressure signal acquired by the second microphone 20 and the sound pressure signal acquired by each first microphone 10, and acquiring angle information of a sound source in a second plane 102 according to the second time difference, wherein the second plane 102 is perpendicular to the first plane 101, and the second plane 102 passes through the projection point of the sound source in the first plane 101.
In step S30, the angle information of the sound source in the second plane 102 may be a deflection angle of the sound source relative to a preset point in the first plane 101.
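The relation between the second time difference and the vertical angle can be sketched geometrically under assumptions of our own (the application states no numeric layout): place one first microphone at (r, 0) in the second plane 102 and the second microphone at (0, h) on the central axis. For a far-field source at elevation angle phi, the arrival-time difference satisfies c*tau = h*sin(phi) - r*cos(phi) = sqrt(h^2 + r^2)*sin(phi - beta) with beta = atan2(r, h), so phi follows from the measured tau:

```python
import math

C = 343.0   # speed of sound, m/s (assumed)

def elevation_from_tdoa(tau, r, h):
    """Vertical angle phi from the second time difference tau, for an assumed
    first microphone at (r, 0) and the second microphone at (0, h)."""
    beta = math.atan2(r, h)
    s = C * tau / math.hypot(r, h)
    return beta + math.asin(max(-1.0, min(1.0, s)))  # clamp against noise

# Round trip with assumed dimensions: synthesize tau for phi = 40 degrees,
# then recover the angle from it.
r, h = 0.05, 0.20
phi = math.radians(40.0)
tau = (h * math.sin(phi) - r * math.cos(phi)) / C
phi_est = math.degrees(elevation_from_tdoa(tau, r, h))   # ~ 40.0 degrees
```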
In this embodiment, the sound source localization method obtains the angle information of the projection point of the sound source in the first plane 101 from the first time difference between the sound pressure signal collected by each first microphone 10 and the sound pressure signals collected by the other first microphones 10. It then obtains the angle information of the sound source in the second plane 102 from the second time difference between the sound pressure signal collected by the second microphone 20 and the sound pressure signal collected by each first microphone 10, where the second plane 102 is perpendicular to the first plane 101 and passes through the projection point of the sound source in the first plane 101. By having the second microphone 20 participate in the positioning calculation, the number and structural complexity of the first microphones 10 can be reduced.
Referring to fig. 4 together, in one embodiment, the step of obtaining the second time difference between the sound pressure signal collected by the second microphone 20 and the sound pressure signal collected by each of the first microphones 10 includes:
a cross-correlation curve of the sound pressure signal collected by the second microphone 20 and the sound pressure signal collected by each first microphone 10 is obtained, and a maximum correlation point is determined in the cross-correlation curve, the time position of which is taken as the second time difference.
The first microphones 10 and the second microphone 20 are used together to determine the vertical angle phi of the sound source (the angle information of the sound source in the second plane 102), since the arrival times of the sound source signal at the first microphones 10 and the second microphone 20 differ in the vertical direction. To ensure that the sound field around the second microphone 20 is not disturbed, each first microphone 10 must be spaced a sufficient distance D from it, and this distance D may exceed half the sound wavelength λ. When D is greater than λ/2, phase aliasing occurs, so the phase difference cannot be used directly to compute the time difference. The present application therefore computes the time difference τ by cross-correlation: a cross-correlation curve of the signal collected by the second microphone 20 and the signal collected by each first microphone 10 is calculated, and the time position of the maximum correlation point in the curve is taken as the time difference τ.
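A minimal sketch of this cross-correlation step (our own code; the sampling rate is an assumed value) takes the lag of the maximum of NumPy's full cross-correlation as the time difference τ:

```python
import numpy as np

FS = 48000   # assumed sampling rate (not specified in this application)

def tdoa_xcorr(ref, sig, fs=FS):
    """Delay (s) of `sig` relative to `ref`, taken as the time position
    of the maximum correlation point of their cross-correlation curve."""
    corr = np.correlate(sig, ref, mode="full")
    lag = int(np.argmax(corr)) - (len(ref) - 1)   # sample lag of the maximum
    return lag / fs

# Broadband (non-periodic) test signal delayed by 12 samples.
rng = np.random.default_rng(0)
ref = rng.standard_normal(4800)
sig = np.concatenate([np.zeros(12), ref[:-12]])
tau = tdoa_xcorr(ref, sig)   # -> 12 / 48000 s
```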
In one embodiment, the step of obtaining the cross-correlation curve of the sound pressure signal collected by the second microphone 20 and the sound pressure signal collected by each of the first microphones 10 includes:
according to the shape of the sound pressure signal collected by the second microphone 20 and the shape of the sound pressure signal collected by each first microphone 10, it is determined whether the two signals are periodically correlated. When the sound pressure signal collected by the second microphone 20 is not periodically correlated with the sound pressure signal collected by each first microphone 10, the step of determining the maximum correlation point in the cross-correlation curve according to the cross-correlation curve is performed.
When there is a periodic correlation between the sound pressure signal collected by the second microphone 20 and the sound pressure signal collected by each first microphone 10, a non-stationary segment of the two signals is searched for, and the step of determining, from the shapes of the signals, whether they are periodically correlated is performed again on that segment.
For a completely stationary periodic sound signal, such as a steady sinusoid, the cross-correlation is itself periodic: multiple maximum correlation points occur periodically in the cross-correlation curve, spaced by the period of the sinusoid, so a unique maximum correlation point cannot be determined. It is therefore necessary to monitor the noise continuously, find a non-stationary phase of it, such as its onset or decay, perform the cross-correlation calculation on that non-stationary segment, and determine the time position of the maximum correlation point.
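One simple way to implement this check, sketched here under our own assumptions (the application does not fix a criterion; the 0.9 relative threshold is ours), is to count near-maximum local peaks in the cross-correlation curve: a steady sinusoid produces several, while a broadband non-stationary segment produces a single dominant peak:

```python
import numpy as np

def is_periodically_correlated(a, b, tol=0.9):
    """True if the cross-correlation curve of `a` and `b` has several
    near-maximum peaks, so no unique maximum correlation point exists."""
    corr = np.abs(np.correlate(a, b, mode="full"))
    mids = corr[1:-1]
    # local maxima whose height is close to the global maximum
    peaks = (mids >= corr[:-2]) & (mids >= corr[2:]) & (mids > tol * corr.max())
    return int(np.count_nonzero(peaks)) > 1

fs = 48000
t = np.arange(0, 0.02, 1 / fs)
sine = np.sin(2 * np.pi * 1000 * t)                        # stationary periodic
burst = np.random.default_rng(1).standard_normal(t.size)   # broadband segment
# is_periodically_correlated(sine, sine) is True; for burst it is False.
```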
In this embodiment, a beamforming algorithm determines the horizontal angle of the sound source, and a cross-correlation algorithm determines the vertical angle on that basis, realizing spatial sound source localization. This allows a sufficient distance between the first microphones 10 and the second microphone 20, prevents the first microphones 10 from disturbing the sound field near the second microphone 20, and ensures the accuracy of the measurement result. Using cross-correlation to compute the time difference between the second microphone 20 and the first microphones 10 ensures that the first microphones 10 can be kept away from, and do not affect, the second microphone 20, preserving the measurement accuracy of the second microphone 20.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any combination that contains no contradiction should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application; their description is specific and detailed, but should not be construed as limiting the scope of the claims. A person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within its scope of protection. Therefore, the protection scope of this patent application shall be subject to the appended claims.
Claims (6)
1. A sound source localization system, comprising:
the first microphones are arranged at equal intervals in a circular ring in a first plane, the distance between any two adjacent first microphones being smaller than a preset value, and the first microphones are used for acquiring sound pressure signals; the preset value is half of the wavelength corresponding to the maximum frequency of the monitored noise;
the second microphone is arranged on a central axis perpendicular to the first plane, the distance from the second microphone to the first plane is greater than the preset value, and the second microphone is used for synchronously acquiring sound pressure signals with the first microphones; and
the processor is electrically connected with the first microphones and the second microphones respectively and used for acquiring phase differences between the sound pressure signals acquired by each first microphone and the sound pressure signals acquired by other first microphones and acquiring angle information of a projection point of a sound source in the first plane according to the phase differences, the processor is further used for acquiring a cross-correlation curve between the sound pressure signals acquired by the second microphones and the sound pressure signals acquired by each first microphone, and determining a maximum correlation point in the cross-correlation curve according to the cross-correlation curve, wherein the time position of the maximum correlation point is used as a second time difference; acquiring angle information of the sound source in a second plane according to the second time difference;
wherein the second plane is perpendicular to the first plane and passes through the projection point of the sound source in the first plane.
2. The sound source localization system of claim 1, wherein the number of the first microphones is greater than or equal to 4.
3. A sound source localization method, characterized by using the sound source localization system according to any one of claims 1 to 2, comprising:
synchronously acquiring sound pressure signals by utilizing a plurality of first microphones and second microphones;
the processor acquires a phase difference between the sound pressure signal acquired by each first microphone and the sound pressure signals acquired by other first microphones, and acquires angle information of a projection point of a sound source in the first plane according to the phase difference;
the processor acquires a cross-correlation curve of the sound pressure signals acquired by the second microphone and the sound pressure signals acquired by each first microphone, and determines a maximum correlation point in the cross-correlation curve according to the cross-correlation curve, wherein the time position of the maximum correlation point is used as a second time difference; and acquiring angle information of a sound source in a second plane according to the second time difference, wherein the second plane is perpendicular to the first plane, and the second plane passes through the projection point of the sound source in the first plane.
4. The sound source localization method according to claim 3, wherein a sound source azimuth is obtained by a delay accumulation algorithm, a multi-signal classification algorithm, a fairness algorithm, or other beamforming algorithm using the phase difference of the sound pressure signals.
5. The sound source localization method according to claim 3, wherein the step of respectively obtaining cross-correlation curves of the sound pressure signals collected by the second microphones and the sound pressure signals collected by each of the first microphones is preceded by:
judging whether the sound pressure signal acquired by the second microphone is periodically related to the sound pressure signal acquired by each first microphone according to the shape of the sound pressure signal acquired by the second microphone and the shape of the sound pressure signal acquired by each first microphone;
when the sound pressure signal collected by the second microphone is not periodically correlated with the sound pressure signal collected by each of the first microphones, the step of determining the maximum correlation point in the cross-correlation curve according to the cross-correlation curve is executed.
6. The sound source localization method according to claim 5, wherein when there is a periodic correlation between the sound pressure signal collected by the second microphone and the sound pressure signal collected by each of the first microphones, a non-stationary section of the sound pressure signal collected by the second microphone and the sound pressure signal collected by each of the first microphones is searched again, and the step of determining whether there is a periodic correlation between the sound pressure signal collected by the second microphone and the sound pressure signal collected by each of the first microphones is performed based on the shape of the sound pressure signal collected by the second microphone and the shape of the sound pressure signal collected by each of the first microphones.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010469457.2A CN111610491B (en) | 2020-05-28 | 2020-05-28 | Sound source positioning system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111610491A CN111610491A (en) | 2020-09-01 |
CN111610491B true CN111610491B (en) | 2022-12-02 |
Family
ID=72201667
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||