CN114994607A - Acoustic imaging method supporting zooming - Google Patents

Acoustic imaging method supporting zooming

Info

Publication number
CN114994607A
Authority
CN
China
Prior art keywords
sound source
array
microphone
sound
distance
Prior art date
Legal status
Granted
Application number
CN202210924179.4A
Other languages
Chinese (zh)
Other versions
CN114994607B (en)
Inventor
曹祖杨
黄明
侯佩佩
梁友贵
周航
张凯强
张永全
Current Assignee
Hangzhou Crysound Electronics Co Ltd
Original Assignee
Hangzhou Crysound Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Crysound Electronics Co Ltd
Priority to CN202210924179.4A
Publication of CN114994607A
Application granted
Publication of CN114994607B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/18Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
    • G01S5/20Position of source determined by a plurality of spaced direction-finders
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01RMEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/12Testing dielectric strength or breakdown voltage ; Testing or monitoring effectiveness or level of insulation, e.g. of a cable or of an apparatus, for example using partial discharge measurements; Electrostatic testing
    • G01R31/1209Testing dielectric strength or breakdown voltage ; Testing or monitoring effectiveness or level of insulation, e.g. of a cable or of an apparatus, for example using partial discharge measurements; Electrostatic testing using acoustic measurements

Abstract

An acoustic imaging method supporting zooming belongs to the technical field of acoustic imaging. The method comprises: step S01, acquiring sound signals collected by a microphone array in real time; step S02, locating the DOA sound source direction in far-field mode based on the sound signals; step S03, performing linear directional scanning in the sound source direction to obtain a linear focusing array, and determining the distance between the sound source and the center of the microphone array based on the linear focusing array; step S04, obtaining a DOA manifold array in near-field mode matching the sound source distance, based on the sound source direction located in step S02 and the distance determined in step S03. The invention can obtain sound source information under a near-field-mode algorithm while detecting in far-field mode, accurately locate the sound source, and zoom in for a close-up.

Description

Acoustic imaging method supporting zooming
Technical Field
The invention relates to the technical field of acoustic imaging, in particular to an acoustic imaging method supporting zooming.
Background
Existing DOA (Direction of Arrival) acoustic imaging technology uses a manifold array at a fixed distance, with a relatively close distance taken as the array surface of the manifold for effective imaging. In actual use, after an operator finds a sound source with the acoustic imager, the operator still needs to move close to the sound source to confirm it, because when the sound source is far from the equipment the imaging cloud map spreads over a large area. In particular, in partial-discharge detection scenarios the operator cannot get close to the high-voltage cable, so an imager based on the existing DOA acoustic imaging technology cannot accurately locate the discharge position.
Invention patent application CN103308889A discloses a passive two-dimensional DOA estimation method for sound sources in complex environments, specifically comprising: (1) collecting speech signals in a room with a uniform circular microphone array; (2) preprocessing the speech signals received by the uniform circular array with spectral subtraction; (3) estimating the relative time delay of each microphone with the M_AEDA algorithm; (4) determining a direction-coefficient vector from a direction-coefficient formula; (5) multiplying the direction-coefficient vector element-wise with the preprocessed speech signals as the input of a minimum-variance distortionless response; (6) processing the input signal with the minimum-variance distortionless response algorithm; (7) performing a spectral-peak search on the output average power to obtain the two-dimensional DOA estimate of the sound source. That invention still adopts a fixed-distance manifold array; it can solve accurate localization of a sound source under reverberation and low signal-to-noise ratio, but cannot solve accurate localization of a distant sound source.
Invention patent application CN113030983A discloses a near-field point-by-point focusing DOA method based on a depth-sounding side-scan sonar, specifically comprising: S1, transmitting and receiving sound waves with a multi-subarray sonar transducer to obtain sonar reception data; S2, digitally interpolating and filtering the sonar reception data to obtain multi-subarray filtered data; S3, beamforming from the multi-subarray filtered data; and S4, focusing point by point on the beamforming result and estimating the DOA. That method still adopts a fixed-distance manifold array; it achieves high depth-sounding precision within the near-field range, but cannot solve accurate localization of a distant sound source.
Disclosure of Invention
The invention aims to solve the problem that an acoustic imager cannot accurately locate a sound source when detecting from a distance. To this end, the invention provides an acoustic imaging method supporting zooming, in which the sound source does not need to be confirmed by approaching it, avoiding the situation where the imaging cloud map spreads over a large area when the sound source is far from the equipment. In particular, the method can be used in inaccessible scenarios such as partial-discharge detection, accurately locating the sound source while the operator stays at a safe distance.
The invention provides an acoustic imaging method supporting zooming, which comprises the following steps:
step S01, acquiring sound signals collected by the microphone array in real time;
step S02, based on the sound signal, the DOA sound source direction is positioned in the far field mode;
step S03, according to the sound source direction, performing linear directional scanning in that direction to obtain a linear focusing array, and determining the distance between the sound source and the center of the microphone array based on the linear focusing array;
step S04, based on the sound source direction located in step S02 and the distance between the sound source and the center of the microphone array determined in step S03, obtaining a DOA manifold array in near-field mode matching the sound source distance.
The invention combines surface acoustic-imaging scanning with linear scanning to calculate the direction and distance of the sound source, and generates a manifold array matched to the sound source distance, which greatly improves the signal-to-noise ratio of DOA imaging and allows the sound source to be located accurately without approaching it.
Preferably, the step S02 includes:
step S21, obtaining the DOA manifold array in far-field mode based on the sound signal;
step S22, calculating the cross-power spectrum $P(\vec{r},\omega)$ of each focusing point on the DOA manifold array in far-field mode, where $b(\vec{r},\omega)$ is the output at focusing position $\vec{r}$ and frequency $\omega$ and $p(\omega)$ is the sound actually picked up by the microphone array; and determining the sound source direction of the focusing point with the maximum cross-power spectrum as the located sound source direction.
Preferably, the step S21 is specifically:
step S21.1, assuming N focusing points of the sound source, and calculating the time delay with which the sound at each focusing point propagates to the microphone array,
$\Delta_m = (|\vec{r} - \vec{r}_m| - |\vec{r}|)/c$,
where $\Delta_m$ is the assumed delay with which the sound source at the focusing point propagates to the m-th microphone, $\vec{r}$ is the vector from the focusing point to the center of the microphone array, $\vec{r}_m$ is the vector from the m-th microphone to the center of the microphone array, $\vec{r} - \vec{r}_m$ is the vector from the focusing point to the m-th microphone, and c is the propagation speed of sound;
step S21.2, based on the calculated time delays, calculating the output at each focusing point according to the formula
$b(\vec{r},\omega) = \sum_{m=1}^{M} p_m(\omega)\, e^{j\omega\Delta_m}$,
which then forms the manifold array,
where $b(\vec{r},\omega)$ is the output at focusing position $\vec{r}$ and frequency $\omega$, $p_m(\omega)$ is the signal received by the m-th microphone, acquired by the microphone array, and $p(\omega)$ is the signal actually collected by the microphone array;
preferably, the step S03 includes:
step S31, assuming N focusing points of the sound source according to the sound source direction located in step S02, and calculating the time delay with which the sound source propagates to the microphone array at the focusing point on each array beam,
$\Delta_m = (|\vec{r} - \vec{r}_m| - |\vec{r}|)/c$,
where $\Delta_m$ is the assumed delay with which the sound source at the focusing point propagates to the m-th microphone, $\vec{r}$ is the vector from the focusing point to the center of the microphone array, $\vec{r}_m$ is the vector from the m-th microphone to the center of the microphone array, $\vec{r} - \vec{r}_m$ is the vector from the focusing point to the m-th microphone, and c is the propagation speed of sound;
step S32, based on the calculated time delays, calculating the output at the focusing point on each array beam according to
$b(\vec{r},\omega) = \sum_{m=1}^{M} p_m(\omega)\, e^{j\omega\Delta_m}$,
which then forms the linear focusing array,
where $b(\vec{r},\omega)$ is the output at focusing position $\vec{r}$ and frequency $\omega$, $p_m(\omega)$ is the signal received by the m-th microphone, and $p(\omega)$ is the signal actually collected by the microphone array;
step S33, determining the distance between the sound source and the center of the microphone array from the linear focusing array by means of the distance formula.
Preferably, the focusing points are evenly distributed over the camera picture, the number of focusing points is set according to the camera view angle and the computing power of the equipment, and the position of each focusing point is calculated from the camera view angle and an initial set distance.
Preferably, the step S04 includes: assuming that, in near-field mode, the distance from the sound source to the center of the microphone array is smaller than or equal to the distance from the newly generated DOA manifold array to the center of the microphone array; and obtaining the DOA manifold array in near-field mode based on the sound source direction located in step S02 and the distance between the sound source and the center of the microphone array determined in step S03.
Preferably, the distance of the sound source from the center of the microphone array is equal to the distance of the newly generated DOA manifold array from the center of the microphone array.
Preferably, the method further includes step S05, forming an acoustic cloud map based on the DOA manifold array in near-field mode obtained in step S04.
Preferably, the method further comprises step S05':
sending the DOA manifold array in near-field mode obtained in step S04 to a directional sound pickup device, the directional sound pickup device being used to pick up the audio of the sound source; or,
sending the DOA manifold array in near-field mode obtained in step S04 to a voiceprint recognition device, the voiceprint recognition device being used to recognize voice data to determine the identity of a user; or,
sending the DOA manifold array in near-field mode obtained in step S04 to a camera device, the camera device zooming in and enlarging the image of the specified area according to the position of the sound source so as to give a close-up of the sound source.
Preferably, the method is applied to an imager.
The invention has the following beneficial effects:
the invention relates to an acoustic imaging method supporting zooming, which is characterized in that the direction and distance of a sound source are calculated by mutually combining surface acoustic imaging scanning and linear scanning, a DOA popular array obtained in a far-field mode is converted, and based on a near-field mode algorithm, the DOA popular array in a near-field mode can be obtained without being close to the sound source, namely, the sound source is accurately positioned without being close to the sound source; particularly, when dangerous scenes such as partial discharge and the like are detected, the method can ensure that an operator is positioned at a safe distance to accurately detect the sound source. In addition, the sound source information identified by the method can be sent to a directional sound pickup device, a voiceprint identification device or a camera device to achieve other application purposes, such as increasing the accurate pickup characteristic of a target sound source, providing a target audio with high signal-to-noise ratio for voiceprint identification, and controlling the camera device to zoom and amplify an image of a designated area so as to close up the sound source.
Drawings
FIG. 1 is a flow chart of an acoustic imaging method supporting zoom of the present invention;
FIG. 2 is an example acoustic cloud map produced during implementation of the acoustic imaging method supporting zooming.
Detailed Description
The following are specific embodiments of the present invention and are further described with reference to the drawings, but the present invention is not limited to these embodiments.
Referring to fig. 1, an acoustic imaging method supporting zooming of the present invention includes:
step S01, acquiring sound signals collected by the microphone array in real time;
step S02, based on the sound signal, the DOA sound source direction is positioned in the far-field mode;
step S03, according to the sound source direction, performing linear directional scanning in that direction to obtain a linear focusing array, and determining the distance between the sound source and the center of the microphone array based on the linear focusing array;
step S04, based on the sound source direction located in step S02 and the distance between the sound source and the center of the microphone array determined in step S03, a DOA manifold array in near-field mode matching the sound source distance is obtained.
The method is used to process and analyze the acquired sound signals and is applied to sound-signal processing equipment such as an imager. When the sound signal in the environment is collected by the microphone array (for example, a 128-channel microphone array) in step S01, the collected sound signal is sent to the imager for processing.
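For illustration, the following Python sketch shows one way the block captured in step S01 could be turned into the per-microphone spectra used in the later steps; the block shape, the use of an FFT, and the band limits are assumptions for illustration and are not specified in the patent.

```python
import numpy as np

def block_spectra(block, fs, f_lo=10_000.0, f_hi=20_000.0):
    """Turn one real-time block of microphone samples into per-microphone spectra.

    block: (M, T) array, one frame from the M-channel microphone array (e.g. M = 128);
    fs: sampling rate in Hz. Returns (omegas, spectra) restricted to the band of
    interest, where spectra has shape (M, n_bins) and omegas are angular frequencies.
    """
    spectra = np.fft.rfft(block, axis=1)                  # p_m(w) for each microphone
    freqs = np.fft.rfftfreq(block.shape[1], d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)              # keep only the target band
    return 2.0 * np.pi * freqs[band], spectra[:, band]
```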
Far-field mode refers to the sound source being located outside the scanned scene, that is, the distance from the sound source to the center of the microphone array is greater than the distance from the DOA manifold array (the DOA manifold array obtained in far-field mode) to the center of the microphone array (see FIG. 2). Near-field mode refers to the sound source being located inside the scanned scene, that is, the distance from the sound source to the center of the microphone array is less than or equal to the distance from the DOA manifold array (obtained in near-field mode) to the center of the microphone array (see FIG. 2).
In the prior art, either the operator moves the detection device close to the sound source to be detected, i.e. the device is in near-field mode, in which case the DOA manifold array is only the near-field-mode manifold array at a fixed distance, and this detection approach is unsuitable for dangerous detection environments such as partial-discharge detection; or the operator detects the sound source with the device at a distant position, i.e. the device is in far-field mode, in which case the DOA manifold array is only the far-field-mode manifold array at a fixed distance, the detection accuracy is low, and the operator still has to approach the sound source to further confirm its position.
In view of these problems, the invention places the sound-signal processing equipment such as an imager at a distance and processes the sound signal in far-field mode; based on the sound source direction located by surface acoustic-imaging scanning in far-field mode and the distance between the sound source and the center of the microphone array determined by linear scanning, a DOA manifold array in near-field mode is obtained by conversion (the sound-signal processing equipment such as the imager does not need to be moved into the scanned scene; instead the DOA manifold array is updated by the algorithm, achieving the detection effect the prior art obtains only by moving the equipment into the scanned scene). The invention updates the DOA manifold array by algorithm (i.e. zooms) without changing the actual detection distance, and does not use a fixed-distance manifold array in the acoustic imaging process, so the sound source can be detected accurately; in particular, it solves the problem that close-range detection is impossible in dangerous detection environments.
The step S02 includes:
step S21, obtaining the DOA manifold array in far-field mode based on the sound signal; specifically, step S21 includes:
step S21.1, assuming N focusing points of the sound source, and calculating the time delay with which the sound at each focusing point propagates to the microphone array,
$\Delta_m = (|\vec{r} - \vec{r}_m| - |\vec{r}|)/c$,
where $\Delta_m$ is the assumed delay with which the sound source at the focusing point propagates to the m-th microphone, $\vec{r}$ is the vector from the focusing point to the center of the microphone array, $\vec{r}_m$ is the vector from the m-th microphone to the center of the microphone array, $\vec{r} - \vec{r}_m$ is the vector from the focusing point to the m-th microphone, and c is the propagation speed of sound, taken as 340 m/s.
The focusing points are evenly distributed over the camera picture; their number is set according to the camera view angle and the computing power of the equipment, and the position of each focusing point is calculated from the camera view angle and an initial set distance. For example, 1/10 of the camera resolution in each dimension is generally chosen: with a camera resolution of 640 × 480, N = 64 × 48 (i.e. 64 columns and 48 rows of focusing points evenly distributed over the camera picture), and the position of each focusing point is then calculated from the camera view angle and the initial set distance (the camera view angle is a parameter of the camera itself; the initial distance is the distance from the focusing point to the center of the microphone array, generally 1 m).
Step S21.2, based on the calculated time delays, calculating the output at each focusing point according to the formula
$b(\vec{r},\omega) = \sum_{m=1}^{M} p_m(\omega)\, e^{j\omega\Delta_m}$,
which then forms the manifold array,
where $b(\vec{r},\omega)$ is the output at focusing position $\vec{r}$ and frequency $\omega$, $p_m(\omega)$ is the signal received by the m-th microphone, acquired by the microphone array, and $p(\omega)$ is the signal actually collected by the microphone array.
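A minimal numerical sketch of steps S21.1 and S21.2, using the delay and frequency-domain delay-and-sum forms as written above; the array geometry, function names, and the phase-compensation convention are assumptions for illustration.

```python
import numpy as np

C = 340.0  # speed of sound, m/s (value given in the description)

def focus_delays(focus_xyz, mic_xyz, array_center=np.zeros(3)):
    """Delay assumption for sound travelling from one focusing point to each microphone.

    focus_xyz: (3,) focusing-point coordinates; mic_xyz: (M, 3) microphone coordinates.
    Returns (M,) delays relative to the path focusing point -> array center.
    """
    r = array_center - focus_xyz                  # vector: focusing point -> array center
    r_fm = mic_xyz - focus_xyz                    # vectors: focusing point -> m-th microphone
    return (np.linalg.norm(r_fm, axis=1) - np.linalg.norm(r)) / C

def focused_output(p_m, omega, delays):
    """Frequency-domain delay-and-sum output b(r, w) at one focusing point.

    p_m: (M,) complex microphone spectra at angular frequency omega (rad/s);
    delays: (M,) delays from focus_delays(). The exponential compensates the
    assumed propagation delays so contributions from the focusing point add in phase.
    """
    return np.sum(p_m * np.exp(1j * omega * delays))
```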
Step S22, calculating the cross-power spectrum $P(\vec{r},\omega)$ of each focusing point on the DOA manifold array in far-field mode, where $b(\vec{r},\omega)$ is the output at focusing position $\vec{r}$ and frequency $\omega$ and $p(\omega)$ is the sound actually picked up by the microphone array; the sound source direction of the focusing point with the maximum cross-power spectrum is determined as the located sound source direction. Here $\omega$ is the angular frequency of the sound and is set as required; for example, if the target sound source frequency is concentrated between 10 kHz and 20 kHz, a frequency band of 10 kHz to 20 kHz is selected with a frequency resolution of 100 Hz, i.e. w = [10000, 10100, 10200, ..., 20000].
The cross-power spectrum of each focusing point is calculated with the cross-power-spectrum method: the phase difference of two signals at any frequency can be obtained from the conjugate product of their frequency spectra. After the cross-power spectra of all focusing points are calculated in step S22, the focusing point with the largest cross-power spectrum is called the "main lobe", and the sound source direction is determined from the sound source coordinates of this focusing point (including the spatial angle between the sound source and the microphone array).
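A sketch of step S22 under the same assumptions: the per-point cross-power spectrum is taken here as the conjugate product of the focused output, summed over the selected 10–20 kHz bins, and the focusing point with the largest value is taken as the main lobe; the patent describes the estimator only at the level above, so this particular choice is illustrative.

```python
import numpy as np

def frequency_bins(f_lo=10_000.0, f_hi=20_000.0, df=100.0):
    """Angular frequencies w for the selected band (10-20 kHz, 100 Hz resolution)."""
    freqs = np.arange(f_lo, f_hi + df, df)
    return 2.0 * np.pi * freqs

def locate_main_lobe(b_grid):
    """Pick the focusing point whose cross-power spectrum is largest.

    b_grid: complex focused outputs b(r, w), shape (n_rows, n_cols, n_freqs),
    e.g. produced by evaluating focused_output() from the sketch above over the
    focus-point grid and the selected frequency bins.
    Returns the (row, col) index of the main lobe and the power map.
    """
    power = np.sum(np.abs(b_grid) ** 2, axis=-1)   # conjugate product b * conj(b), summed over bins
    row, col = np.unravel_index(np.argmax(power), power.shape)
    return (row, col), power
```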
After the sound source direction is determined, step S03 performs linear directional scanning of the actual scene in that direction. The scanning can be done by the operator holding the imager and turning it toward that direction, or by a rotating platform that receives an instruction and turns the sound-signal processing equipment toward the sound source direction for linear directional scanning. Alternatively, after the sound source direction is determined, step S03 does not scan the actual scene but directly forms focusing points in the sound source direction inside the signal processing equipment and proceeds to the distance calculation of step S03.
Step S03 performs a linear directional scan of the sound source. Its scanning principle is similar to that of the DOA scan; the difference is that the DOA scan sweeps the planar manifold array point by point, while the linear directional scan mainly sweeps an array beam in a single direction. To this end, step S03 includes:
step S31, assuming N focal points of the sound source according to the sound source direction located at step S2, and calculating the time delay of the sound source propagating to the microphone array at the focal point on each array beam,
Figure 106635DEST_PATH_IMAGE032
wherein, in the step (A),
Figure DEST_PATH_IMAGE033
for the delay assumption of the sound source propagating to the mth microphone at the focus point,
Figure 714333DEST_PATH_IMAGE026
a vector pointing the focus point to the center position of the microphone array,
Figure 625658DEST_PATH_IMAGE034
a vector pointing to the center position of the microphone array for the m-th microphone,
Figure 344215DEST_PATH_IMAGE035
a vector from a focus point to the m microphone, and c is a sound propagation speed;
the focusing points are the focusing points which can be evenly distributed in a camera picture, the number of the focusing points is set according to the camera view angle and the equipment calculation force, and the position of each focusing point is determined according to the camera view angle and the initial setting distance. For example, the camera resolution length and width 1/10 is generally selected, such as the camera resolution 640 × 480, where N = 64 × 48 (i.e. 64 rows and 48 columns of focusing points are evenly distributed on the camera screen, and the position of each focusing point may be calculated with a camera angle and an initial set distance (the camera angle is a parameter of the camera itself, the initial distance is a distance from the focusing point to the center of the microphone array, and the initial distance is generally 1 m)).
Although step S31 and step S21.1 use the same formula to compute the focusing points and time delays, they differ in scanning mode: step S31 scans toward the specific sound source direction located in step S02, so the assumed focusing points in the two steps differ (in both number and position), and the calculated time delays differ accordingly.
Step S32, based on the calculated time delays, calculating the output at the focusing point on each array beam according to
$b(\vec{r},\omega) = \sum_{m=1}^{M} p_m(\omega)\, e^{j\omega\Delta_m}$,
which then forms the linear focusing array,
where $b(\vec{r},\omega)$ is the output at focusing position $\vec{r}$ and frequency $\omega$, $p_m(\omega)$ is the signal received by the m-th microphone, and $p(\omega)$ is the signal actually collected by the microphone array. Here $\omega$ is the angular frequency of the sound and is set as required; for example, if the target sound source frequency is concentrated between 10 kHz and 20 kHz, a frequency band of 10 kHz to 20 kHz is selected with a frequency resolution of 100 Hz, i.e. w = [10000, 10100, 10200, ..., 20000]. The focusing precision of the linear focusing array is 2 cm (the wavelength of ultrasound above 20 kHz is less than 2 cm).
Step S33, determining the distance between the sound source and the center of the microphone array from the linear focusing array by means of the distance formula.
Based on the sound source coordinates (x, y, z) of the focusing point, the distance between the sound source and the center of the microphone array can be calculated with the distance formula between coordinates.
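Steps S31–S33 can be sketched as a scan along the located direction; the 2 cm spacing matches the stated focusing precision, while the range limits, function name, and the choice of the strongest focused output as the range estimate are assumptions for illustration.

```python
import numpy as np

C = 340.0  # speed of sound, m/s

def estimate_source_distance(direction, mic_xyz, spectra, omegas,
                             r_min=0.5, r_max=50.0, step=0.02):
    """Linear directional scan along `direction` (unit vector from the array center).

    spectra: (M, n_freqs) complex microphone spectra at angular frequencies `omegas`.
    Focusing points are spaced `step` metres apart (2 cm, matching the stated
    focusing precision). Returns the range of the strongest focused output and its
    coordinates (x, y, z); the distance to the array center then follows from the
    usual Euclidean distance formula.
    """
    ranges = np.arange(r_min, r_max, step)
    best_r, best_power = ranges[0], -np.inf
    for r in ranges:
        focus = r * direction                              # focusing point on the scan line
        r_fm = mic_xyz - focus                             # focusing point -> microphones
        delays = (np.linalg.norm(r_fm, axis=1) - r) / C    # relative to focusing point -> center
        steer = np.exp(1j * np.outer(delays, omegas))      # (M, n_freqs) phase compensation
        b = np.sum(spectra * steer, axis=0)                # focused output per frequency
        power = np.sum(np.abs(b) ** 2)
        if power > best_power:
            best_r, best_power = r, power
    source_xyz = best_r * direction
    return best_r, source_xyz
```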
Step S04 includes: assuming that, in near-field mode, the distance from the sound source to the center of the microphone array is smaller than or equal to the distance from the newly generated DOA manifold array to the center of the microphone array; and obtaining the DOA manifold array in near-field mode based on the sound source direction located in step S02 and the distance between the sound source and the center of the microphone array determined in step S03. Preferably, when the DOA algorithm is in near-field mode and the distance from the sound source to the center of the microphone array equals the distance from the manifold array to the center of the microphone array, the delay calculated at the focusing point matches the actual delay of the sound propagating to the microphone array, and the signal-to-noise ratio of the calculated cross-power spectrum is largest. In the calculation, the center of the microphone array can be taken as the center of the sphere.
Taking FIG. 2 as an example: when scanning in far-field mode (the distance from the sound source to the center of the microphone array is greater than the distance from the manifold array to the center of the microphone array), a DOA manifold array is formed with the microphone array as the spherical center, and the field of view is relatively large. After the sound source direction is determined, a linear scan is performed and the distance between the sound source and the center of the microphone array is calculated; a spherical surface is then constructed with this distance as the radius, and a new manifold array is determined in combination with the sound source direction. In this process the focusing-point coordinates (x, y, z) are converted into coordinates in a spherical coordinate system. The original manifold array, with its large field of view and short distance, is thus switched to a manifold array with a small field of view at a distance matched to the sound source, which greatly improves the signal-to-noise ratio of DOA imaging.
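To illustrate the zoom of step S04, the sketch below regenerates the focusing points of the manifold array on a sphere whose radius equals the estimated source distance, oriented toward the located direction; the narrow angular span, grid size, and function name are assumptions for illustration.

```python
import numpy as np

def near_field_manifold(direction, distance, n_cols=64, n_rows=48,
                        span_h_deg=10.0, span_v_deg=7.5):
    """Place the new manifold's focusing points on a sphere of radius `distance`.

    direction: unit vector from the array center to the located source;
    span_h_deg / span_v_deg: assumed (narrow) angular extent of the zoomed view.
    Returns (n_rows, n_cols, 3) focusing-point coordinates in Cartesian form,
    obtained from spherical coordinates around `direction`.
    """
    az0 = np.arctan2(direction[1], direction[0])           # azimuth of the located direction
    el0 = np.arcsin(np.clip(direction[2], -1.0, 1.0))      # elevation of the located direction
    az = az0 + np.radians(np.linspace(-span_h_deg / 2, span_h_deg / 2, n_cols))
    el = el0 + np.radians(np.linspace(-span_v_deg / 2, span_v_deg / 2, n_rows))
    A, E = np.meshgrid(az, el)                              # (n_rows, n_cols)
    x = distance * np.cos(E) * np.cos(A)
    y = distance * np.cos(E) * np.sin(A)
    z = distance * np.sin(E)
    return np.stack([x, y, z], axis=-1)
```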
The method further includes step S05, forming an acoustic cloud map (as shown in FIG. 2) based on the DOA manifold array in near-field mode obtained in step S04.
The method of the present invention further includes step S05', specifically:
sending the DOA manifold array in near-field mode obtained in step S04 to a directional sound pickup device, which is used to pick up the audio of the sound source; the directional sound pickup device can better resolve different sound sources in the same direction and pick up the sound source audio accurately, rather than picking up all sound in a single direction; or,
sending the DOA manifold array in near-field mode obtained in step S04 to a voiceprint recognition device, which is used to recognize voice data to determine the identity of a user; the voiceprint recognition device thereby obtains an input signal with a higher signal-to-noise ratio, which helps it recognize the user's identity accurately from the voice data; or,
sending the DOA manifold array in near-field mode obtained in step S04 to a camera device, which zooms in and enlarges the image of the specified area according to the position of the sound source so as to give a close-up of the sound source.
It will be appreciated by persons skilled in the art that the embodiments of the invention described above and shown in the drawings are given by way of example only and are not limiting of the invention. The objects of the present invention have been fully and effectively accomplished. The functional and structural principles of the present invention have been shown and described in the embodiments, and any variations or modifications may be made to the embodiments of the present invention without departing from the principles described.

Claims (10)

1. An acoustic imaging method supporting zooming, comprising:
step S01, acquiring sound signals collected by the microphone array in real time;
step S02, based on the sound signal, the DOA sound source direction is positioned in the far field mode;
step S03, according to the sound source direction, carrying out linear directional scanning on the sound in the sound source direction to obtain a linear focusing array, and determining the distance between the sound source and the center of the microphone array based on the linear focusing array;
step S04, based on the sound source direction located in step S02 and the distance between the sound source and the center of the microphone array determined in step S03, obtaining a DOA manifold array in near-field mode matching the sound source distance.
2. The method according to claim 1, wherein the step S02 comprises:
step S21, obtaining a DOA manifold array in far-field mode based on the sound signal;
step S22, calculating the cross-power spectrum $P(\vec{r},\omega)$ of each focusing point on the DOA manifold array in far-field mode, where $b(\vec{r},\omega)$ is the output at focusing position $\vec{r}$ and frequency $\omega$ and $p(\omega)$ is the sound actually picked up by the microphone array; and determining the sound source direction of the focusing point with the maximum cross-power spectrum as the located sound source direction.
3. The acoustic imaging method supporting zooming according to claim 2, wherein the step S21 is specifically as follows:
step S21.1, assuming N focusing points of the sound source, and calculating the time delay with which the sound at each focusing point propagates to the microphone array,
$\Delta_m = (|\vec{r} - \vec{r}_m| - |\vec{r}|)/c$,
wherein $\Delta_m$ is the assumed delay with which the sound source at the focusing point propagates to the m-th microphone, $\vec{r}$ is the vector from the focusing point to the center of the microphone array, $\vec{r}_m$ is the vector from the m-th microphone to the center of the microphone array, $\vec{r} - \vec{r}_m$ is the vector from the focusing point to the m-th microphone, and c is the propagation speed of sound;
step S21.2, based on the calculated time delays, calculating the output at each focusing point according to the formula
$b(\vec{r},\omega) = \sum_{m=1}^{M} p_m(\omega)\, e^{j\omega\Delta_m}$,
which then forms the manifold array,
wherein $b(\vec{r},\omega)$ is the output at focusing position $\vec{r}$ and frequency $\omega$, $p_m(\omega)$ is the signal received by the m-th microphone, and $p(\omega)$ is the signal actually collected by the microphone array.
4. The acoustic imaging method supporting zooming according to claim 1, wherein the step S03 includes:
step S31, assuming N focusing points of the sound source according to the sound source direction located in step S02, and calculating the time delay with which the sound source propagates to the microphone array at the focusing point on each array beam,
$\Delta_m = (|\vec{r} - \vec{r}_m| - |\vec{r}|)/c$,
wherein $\Delta_m$ is the assumed delay with which the sound source at the focusing point propagates to the m-th microphone, $\vec{r}$ is the vector from the focusing point to the center of the microphone array, $\vec{r}_m$ is the vector from the m-th microphone to the center of the microphone array, $\vec{r} - \vec{r}_m$ is the vector from the focusing point to the m-th microphone, and c is the propagation speed of sound;
step S32, based on the calculated time delays, calculating the output at the focusing point on each array beam according to
$b(\vec{r},\omega) = \sum_{m=1}^{M} p_m(\omega)\, e^{j\omega\Delta_m}$,
which then forms the linear focusing array,
wherein $b(\vec{r},\omega)$ is the output at focusing position $\vec{r}$ and frequency $\omega$, $p_m(\omega)$ is the signal received by the m-th microphone, and $p(\omega)$ is the signal actually collected by the microphone array;
step S33, determining the distance between the sound source and the center of the microphone array from the linear focusing array by means of the distance formula.
5. The acoustic imaging method supporting zooming according to claim 3 or 4, wherein the focusing points are evenly distributed over the camera picture, the number of focusing points is set according to the camera view angle and the computing power of the equipment, and the position of each focusing point is calculated from the camera view angle and an initial set distance.
6. The method according to claim 1, wherein the step S04 comprises: assuming that, in near-field mode, the distance between the sound source and the center of the microphone array is smaller than or equal to the distance between the newly generated DOA manifold array and the center of the microphone array; and obtaining the DOA manifold array in near-field mode based on the sound source direction located in step S02 and the distance between the sound source and the center of the microphone array determined in step S03.
7. The method of claim 6, wherein the distance of the sound source from the center of the microphone array is equal to the distance of the newly generated DOA manifold array from the center of the microphone array.
8. The method of claim 1, further comprising step S05, forming an acoustic cloud map based on the DOA manifold array in near-field mode obtained in step S04.
9. The acoustic imaging method supporting zooming according to claim 1, further comprising step S05':
sending the DOA manifold array in near-field mode obtained in step S04 to a directional sound pickup device, the directional sound pickup device being used to pick up the audio of the sound source; or,
sending the DOA manifold array in near-field mode obtained in step S04 to a voiceprint recognition device, the voiceprint recognition device being used to recognize voice data to determine the identity of a user; or,
sending the DOA manifold array in near-field mode obtained in step S04 to a camera device, the camera device zooming in and enlarging the image of the specified area according to the position of the sound source so as to give a close-up of the sound source.
10. The method of claim 1, applied to an imager.
CN202210924179.4A 2022-08-03 2022-08-03 Acoustic imaging method supporting zooming Active CN114994607B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210924179.4A CN114994607B (en) 2022-08-03 2022-08-03 Acoustic imaging method supporting zooming

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210924179.4A CN114994607B (en) 2022-08-03 2022-08-03 Acoustic imaging method supporting zooming

Publications (2)

Publication Number Publication Date
CN114994607A true CN114994607A (en) 2022-09-02
CN114994607B CN114994607B (en) 2022-11-04

Family

ID=83020867

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210924179.4A Active CN114994607B (en) 2022-08-03 2022-08-03 Acoustic imaging method supporting zooming

Country Status (1)

Country Link
CN (1) CN114994607B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115452141A (en) * 2022-11-08 2022-12-09 杭州兆华电子股份有限公司 Non-uniform acoustic imaging method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103954931A (en) * 2014-04-28 2014-07-30 西安交通大学 Method for locating far field and near field mixed signal sources
US20150055797A1 (en) * 2013-08-26 2015-02-26 Canon Kabushiki Kaisha Method and device for localizing sound sources placed within a sound environment comprising ambient noise
CN111044973A (en) * 2019-12-31 2020-04-21 山东大学 MVDR target sound source directional pickup method for microphone matrix
CN111693942A (en) * 2020-07-08 2020-09-22 湖北省电力装备有限公司 Sound source positioning method based on microphone array
CN113868583A (en) * 2021-12-06 2021-12-31 杭州兆华电子有限公司 Method and system for calculating sound source distance focused by subarray wave beams
CN114624689A (en) * 2022-05-12 2022-06-14 杭州兆华电子股份有限公司 Near-field focusing sound source distance calculation method and system based on acoustic imaging instrument


Also Published As

Publication number Publication date
CN114994607B (en) 2022-11-04

Similar Documents

Publication Publication Date Title
CN104106267B (en) Signal enhancing beam forming in augmented reality environment
JP4469887B2 (en) Ultrasonic camera tracking system and related method
JP4296197B2 (en) Arrangement and method for sound source tracking
KR101761312B1 (en) Directonal sound source filtering apparatus using microphone array and controlling method thereof
CN107534725B (en) Voice signal processing method and device
US6198693B1 (en) System and method for finding the direction of a wave source using an array of sensors
CN111044973B (en) MVDR target sound source directional pickup method for microphone matrix
CN113281706B (en) Target positioning method, device and computer readable storage medium
TW201120469A (en) Method, computer readable storage medium and system for localizing acoustic source
CN205621437U (en) Remote voice acquisition device that audio -video was jointly fixed a position
US20160165338A1 (en) Directional audio recording system
JP6977448B2 (en) Device control device, device control program, device control method, dialogue device, and communication system
CN106887236A (en) A kind of remote speech harvester of sound image combined positioning
CN114994607B (en) Acoustic imaging method supporting zooming
CN111445920A (en) Multi-sound-source voice signal real-time separation method and device and sound pick-up
US20160161594A1 (en) Swarm mapping system
CN110322892B (en) Voice pickup system and method based on microphone array
CN112672251A (en) Control method and system of loudspeaker, storage medium and loudspeaker
KR101664733B1 (en) Omnidirectional high resolution tracking and recording apparatus and method
KR101542647B1 (en) A Method for Processing Audio Signal Using Speacker Detection and A Device thereof
CN115453300B (en) Partial discharge positioning system and method based on acoustic sensor array
JP6471955B2 (en) Monitoring system and directivity control method in monitoring system
JP6879144B2 (en) Device control device, device control program, device control method, dialogue device, and communication system
Matsumoto et al. A miniaturized adaptive microphone array under directional constraint utilizing aggregated microphones
Gomez-Bolanos et al. Benefits and applications of laser-induced sparks in real scale model measurements

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant