CN115575896B - Feature enhancement method for non-point sound source image - Google Patents
- Publication number: CN115575896B (application CN202211524057.2A)
- Authority: CN (China)
- Prior art keywords: sound source, sound, array, point, signal
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G — Physics
- G01S — Radio direction-finding; radio navigation; determining distance or velocity by use of radio waves; locating or presence-detecting by use of the reflection or reradiation of radio waves; analogous arrangements using other waves
- G01S5/18 — Position-fixing by co-ordinating two or more direction or position line determinations using ultrasonic, sonic, or infrasonic waves
- G01S5/20 — Position of source determined by a plurality of spaced direction-finders
Abstract
The invention relates to a feature enhancement method for a non-point sound source image, comprising the following steps. S1, determining the sound source position of a non-point sound source; S2, obtaining an array-focused sound source signal; S3, obtaining a plurality of sound source spectral features; S4, obtaining the frequency-domain signal received by each array element; S5, forming, in M-dimensional space (M being the number of array elements), the time signal point corresponding to each moment; S6, clustering the signal points of all moments into K classes, i.e. K sound-emitting areas of the non-point sound source; S7, obtaining for each sound-emitting area a fitted straight line formed in M-dimensional space; S8, calculating the regional spectral feature of each sound-emitting area from the filtered signals received by the array elements at the final moment and the weight coefficients with which each array element receives the signal of each sound-emitting area; and S9, performing secondary imaging based on the regional spectral features, the positions of the array elements and the sound source position of the non-point sound source to obtain a feature-enhanced sound source image. The invention can enhance the features of the sound source image of a non-point sound source, thereby improving the imaging effect of an acoustic imaging instrument on a weak sound source.
Description
Technical Field
The invention belongs to the technical field of sound source positioning, and particularly relates to a feature enhancement method for a non-point sound source image.
Background
Acoustic imaging is based on microphone-array measurement technology. By measuring the phase differences with which a sound wave in a given space arrives at each microphone, the position of the sound source is determined according to the phased-array principle and its amplitude is measured. The spatial distribution of the sound source is then displayed as an image, i.e. a cloud map of the spatial sound-field distribution in which intensity is represented by the colour and brightness of the image.
For example, Chinese patent publication No. CN110082725A discloses a sound source localization delay estimation method and a sound source localization system based on a microphone array. A newly proposed frequency-domain weighting function integrates two improved weighting functions, PHAT and ML, making up for the inability of the original algorithms to resist noise and reverberation at the same time. First, the microphone array receives two signal channels, which are converted into digital signals by ADC sampling; the two channels are windowed and framed, and their frequency-domain representations are obtained by Fourier transform. The cross-power spectrum and the weighting function of the two frames are computed, the cross-power spectrum is weighted, the cross-correlation function of the two channels is obtained by inverse Fourier transform of the weighted cross-power spectrum, and finally peak detection on the cross-correlation function yields the relative time delay of the two channels. The method reduces the influence of environmental noise and reverberation on delay estimation, improves delay-estimation accuracy and improves sound source localization precision.
For another example, Chinese patent publication No. CN113126028A discloses a noise source positioning method based on multiple microphone arrays. M microphone sensors are selected to construct an annular microphone array; one sensor serves as the reference, an array coordinate system is established at the reference sensor, the other M-1 sensors are arranged around it, and D sound sources are present in a cabin. The relative transfer functions from the D sound sources to each sensor are obtained and an array flow-pattern matrix of the annular array is constructed. The linear distance between a sound source and the reference sensor, the azimuth angle of the source relative to the reference sensor, and the source frequency are then introduced to build a near-field array flow-pattern model, and the azimuth angle of each source relative to the reference sensor is estimated with the MUSIC algorithm. With two or more identical annular arrays preset in the cabin, the azimuth angle of the source relative to each array is estimated, the distance from the source to each array is solved globally by least squares, and the sound source position is thereby obtained.
Research on sound source localization is therefore relatively mature, but research on imaging the resulting sound map is scarce. When an acoustic imaging instrument images a weak sound source, the weak signal leads to a poor final image, so the picture ultimately presented to the user is of low quality. A method for enhancing the features of the sound source image is therefore needed.
Disclosure of Invention
In view of the above problems in the prior art, the present invention provides a method for enhancing characteristics of a sound source image of a non-point sound source, which can enhance characteristics of the sound source image of the non-point sound source, thereby improving an imaging effect of a sound imaging apparatus on a weak sound source. The invention adopts the following technical scheme:
a feature enhancement method for a non-point sound source image comprises the following steps:
s1, determining the sound source position of a non-point sound source;
s2, pointing the microphone array of the acoustic imaging instrument to the sound source position to obtain an array focusing sound source signal;
s3, carrying out frequency spectrum search on the array focusing sound source signals to obtain a plurality of sound source frequency spectrum characteristics;
s4, performing band-pass filtering processing on the sound source signals received by the array elements according to a plurality of sound source frequency spectrum characteristics to obtain filtering signals received by the array elements, and performing frequency domain processing on the filtering signals to obtain frequency domain signals;
S5, forming, from the frequency-domain signals received by all array elements at each moment, the time signal point corresponding to each moment in M-dimensional space; the time signal point corresponding to moment t has coordinates (X_1(t), X_2(t), ..., X_M(t)), where X_m(t) represents the frequency-domain signal received by the m-th array element at moment t, i.e. the coordinate of the m-th dimension in M-dimensional space; M represents the total number of microphone array elements, m = 1, 2, ..., M, and t = 1, 2, ..., T, where T represents the final moment;
S6, clustering the signal points of all moments based on a clustering algorithm to form K classes, i.e. K sound-emitting areas of the non-point sound source;
S7, fitting the time signal points in each sound-emitting area to obtain, for each sound-emitting area, a fitted straight line formed in M-dimensional space; the slopes of the fitted line in the different dimensions represent the weight coefficients with which each array element receives the signal of the sound-emitting area corresponding to that fitted line;
S8, calculating the sound source signal of each sound-emitting area at the final moment from the filtered signals received by the array elements at the final moment and the weight coefficients with which each array element receives the signal of each sound-emitting area, and obtaining the regional spectral feature of each sound-emitting area from its sound source signal;
and S9, carrying out secondary imaging based on the frequency spectrum characteristics of each region, the positions of each array element and the sound source position of the non-point sound source to obtain a sound source image with enhanced characteristics, and carrying out sound source positioning.
Preferably, step S3 includes the following steps:
S3.1, performing a differential operation on the array-focused sound source signal:

y'(t) = dy(t)/dt

wherein y'(t) represents the differentiated array-focused sound source signal, y(t) represents the array-focused sound source signal, and d(·)/dt denotes differentiation;

S3.2, performing a spectrum search on the differentiated array-focused sound source signal to obtain a plurality of sound source spectral features f_1, f_2, ..., f_Q, each satisfying:

|Y'(f_q)| > Γ

wherein f_q denotes the q-th sound source spectral feature, Y'(f) denotes the spectrum of the differentiated array-focused signal, and Γ represents an imaging threshold.
Preferably, the method further comprises the following steps between step S5 and step S6:
normalizing the signal point of each moment to obtain the normalized signal point corresponding to each moment, the normalization being calculated as:

P̃(t) = (X_1(t)/X_1(t), X_2(t)/X_1(t), ..., X_M(t)/X_1(t))

wherein X_1(t) represents the frequency-domain signal received by the 1st array element at moment t, and P̃(t) represents the normalized signal point corresponding to moment t;
in step S6, the normalized signal points corresponding to each moment are clustered based on a clustering algorithm to form K classes, i.e. K sound-emitting areas of the non-point sound source.
Preferably, step S6 includes the following steps:
S6.2, calculating the distance from each normalized signal point to each cluster centre, and assigning each normalized signal point to the class of its nearest cluster centre;

S6.3, recalculating the K cluster centres from all normalized signal points in each class;

S6.4, repeating steps S6.2 to S6.3 until a preset number of iterations is reached, finally obtaining K classes, i.e. K sound-emitting areas of the non-point sound source.
Preferably, in step S7, the slopes, in the different dimensions, of the fitted straight line formed in M-dimensional space for each sound-emitting area can be represented as k_{1,j}, k_{2,j}, ..., k_{M,j}, wherein k_{m,j} denotes the slope, in the m-th dimension, of the fitted straight line corresponding to the j-th sound-emitting area, i.e. the weight coefficient with which the m-th array element receives the signal of the j-th sound-emitting area.
Preferably, in step S7, the time signal points in each sound-emitting area are fitted by the least-squares method to obtain the fitted straight line formed in M-dimensional space for each sound-emitting area.
Preferably, in step S8, the sound source signal of each sound-emitting area at the final moment is calculated from:

x_m(T) = Σ_{j=1}^{K} k_{m,j} · s_j(T),  m = 1, 2, ..., M

wherein x_m(T) denotes the filtered signal received by the m-th array element at the final moment T, k_{m,j} denotes the weight coefficient with which the m-th array element receives the signal of the j-th sound-emitting area, and s_j(T) denotes the sound source signal of the j-th sound-emitting area at the final moment.
Preferably, step S9 includes the following steps:
S9.1, calculating the array flow pattern of each array element at each regional spectral feature, based on the regional spectral features of the sound-emitting areas of the non-point sound source, the positions of the array elements and the sound source position;

S9.2, performing secondary imaging based on the array flow patterns of the array elements at the regional spectral features to obtain a feature-enhanced sound source image.
Preferably, in step S9.1, the array flow pattern of each array element at each regional spectral feature is calculated as:

a_m(f_j) = exp(-i · 2π · f_j · |p_0 - p_m| / c)

wherein f_j denotes the regional spectral feature of the j-th sound-emitting area, a_m(f_j) denotes the array flow pattern of the m-th array element at the spectral feature of the j-th sound-emitting area, p_0 denotes the sound source position, i denotes the imaginary unit, c denotes the speed of sound, and p_m denotes the coordinates of the m-th array element.
Preferably, in step S9.2, the feature-enhanced sound source image is calculated as:

B = Σ_{j=1}^{K} | Σ_{m=1}^{M} a_m*(f_j) · X_m(f_j) |²

wherein B represents the feature-enhanced sound source image, f_j denotes the regional spectral feature of the j-th sound-emitting area, and X_m(f_j) denotes the component, at the regional spectral feature f_j, of the sound source signal received by the m-th array element.
The invention has the beneficial effects that:
the characteristic enhancement can be carried out on the sound source image of the non-point sound source, and the imaging effect of the sound imaging instrument on the sound source is further improved.
Because the mixing frequencies of the sounding components at different positions of the non-point sound source can be different, and a dominant sound source exists at different moments, the signal correlation of the sound source frequency needs to be searched, and the imaging enhancement is respectively performed on different positions of the non-point sound source to achieve the image enhancement effect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a feature enhancement method for a non-point sound source image according to the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided by way of specific examples, and other advantages and effects of the present invention will be readily apparent to those skilled in the art from the disclosure herein. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
Referring to fig. 1, the present embodiment provides a feature enhancement method for a non-point sound source image, including the steps of:
s1, determining the sound source position of a non-point sound source;
s2, pointing the microphone array of the acoustic imaging instrument to the sound source position to obtain an array focusing sound source signal;
s3, carrying out frequency spectrum search on the array focusing sound source signals to obtain a plurality of sound source frequency spectrum characteristics;
s4, performing band-pass filtering processing on the sound source signals received by the array elements according to a plurality of sound source frequency spectrum characteristics to obtain filtering signals received by the array elements, and performing frequency domain processing on the filtering signals to obtain frequency domain signals;
S5, forming, from the frequency-domain signals received by all array elements at each moment, the time signal point corresponding to each moment in M-dimensional space; the time signal point corresponding to moment t has coordinates (X_1(t), X_2(t), ..., X_M(t)), where X_m(t) represents the frequency-domain signal received by the m-th array element at moment t, i.e. the coordinate of the m-th dimension in M-dimensional space; M represents the total number of microphone array elements, m = 1, 2, ..., M, and t = 1, 2, ..., T, where T represents the final moment;
It should be noted that the above-mentioned dimensions can be understood by analogy with two-dimensional and three-dimensional coordinate systems: the coordinates of the two-dimensional system are (x, y) and those of the three-dimensional system are (x, y, z), where x represents the coordinate of the first dimension, y the coordinate of the second dimension, and z the coordinate of the third dimension. Here each array element contributes one dimension, giving an M-dimensional coordinate per moment.
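The formation of time signal points in step S5 can be sketched as follows. This is a minimal illustration only: the patent's own formulas are rendered as images, so the framing length, the use of per-frame peak magnitudes, and all function and variable names here are assumptions.

```python
import numpy as np

def signal_points(filtered, n_fft=256):
    """Form one M-dimensional time signal point per frame (cf. step S5).

    filtered: (M, T) band-pass-filtered signal of each of the M array
    elements. Each frame of n_fft samples yields one point whose m-th
    coordinate is derived from the m-th element's frequency-domain
    signal (here: the peak magnitude of its frame spectrum).
    Returns an array of shape (n_frames, M).
    """
    M, T = filtered.shape
    n_frames = T // n_fft
    points = np.empty((n_frames, M))
    for t in range(n_frames):
        frame = filtered[:, t * n_fft:(t + 1) * n_fft]
        spectrum = np.abs(np.fft.rfft(frame, axis=1))  # per-element spectrum
        points[t] = spectrum.max(axis=1)               # one coordinate per element
    return points
```

Each row of the result is one point in M-dimensional space, ready for the normalization and clustering of steps S5 to S6.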
S6, clustering the signal points of all moments based on a clustering algorithm to form K classes, i.e. K sound-emitting areas of the non-point sound source;
S7, fitting the time signal points in each sound-emitting area to obtain, for each sound-emitting area, a fitted straight line formed in M-dimensional space; the slopes of the fitted line in the different dimensions represent the weight coefficients with which each array element receives the signal of the sound-emitting area corresponding to that fitted line; in this embodiment the least-squares method is used for fitting;
S8, calculating the sound source signal of each sound-emitting area at the final moment from the filtered signals received by the array elements at the final moment and the weight coefficients with which each array element receives the signal of each sound-emitting area, and obtaining the regional spectral feature of each sound-emitting area from its sound source signal;
and S9, carrying out secondary imaging based on the frequency spectrum characteristics of each region, the positions of each array element and the sound source position of the non-point sound source to obtain a sound source image with enhanced characteristics.
It should be noted that although a point sound source exists in the ideal case and is relatively rare in real life, a sound source with a small sound-emitting area can be approximated as a point sound source; the non-point sound source in this embodiment denotes a sound source with a large sound-emitting area (the sound-emitting area can also be regarded as a diaphragm).
Because the mixed frequencies of the sound-emitting components at different positions of a non-point sound source may differ, and a dominant sound source exists at different moments, the signal correlation of the sound source frequencies needs to be searched, and imaging enhancement is applied separately to the different positions of the non-point sound source to achieve the image enhancement effect.
Therefore, the invention can enhance the characteristics of the sound source image of the non-point sound source, thereby improving the imaging effect of the acoustic imaging instrument on the weak sound source.
Specifically, the method comprises the following steps:
In step S1, the output of the acoustic imaging instrument is a two-dimensional image whose physical meaning is that the stronger the energy of the sound source, the brighter the sound source position in the image; the horizontal scanning angle is the abscissa and the vertical scanning angle the ordinate. The position of the sound source can therefore be determined by an energy peak search, and is recorded as p_0.
In step S2, the microphone array of the acoustic imaging instrument has M array elements, and the signal received by the m-th array element is recorded as x_m(t). The frequency-domain signal of each array element can be obtained using the fast Fourier transform:

X_m(f) = FFT[x_m(t)]

wherein f represents frequency and FFT[·] represents the fast Fourier transform operation.
Since the sound source position p_0 has been calculated in step S1, the microphone array is panned to point at the sound source position, obtaining the array-focused sound source signal:

Y(f) = Σ_{m=1}^{M} X_m(f) · exp(i · 2π · f · |p_0 - p_m| / c)

wherein i represents the imaginary unit, p_m denotes the coordinates of the m-th array element, and c represents the speed of sound.
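The focusing operation above can be sketched as a frequency-domain delay-and-sum. The exponential sign convention and all names are assumptions, since the patent's formula is an image:

```python
import numpy as np

def focus(X, freqs, elem_pos, src_pos, c=343.0):
    """Delay-and-sum focus of per-element spectra toward an assumed source.

    X: (M, F) frequency-domain signals of the M array elements at the F
    frequencies in `freqs`; elem_pos: (M, D) element coordinates;
    src_pos: (D,) sound source position. Each element's spectrum is
    phase-aligned by its propagation delay from the source and the
    aligned spectra are summed, giving the focused spectrum Y(f).
    """
    d = np.linalg.norm(elem_pos - src_pos, axis=1)                # per-element distance
    phase = np.exp(2j * np.pi * freqs[None, :] * d[:, None] / c)  # (M, F) alignment
    return (X * phase).sum(axis=0)
```

When the assumed source position matches the true one, the M aligned contributions add coherently, which is what makes the focused signal usable for the spectrum search of step S3.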
Step S3 includes the following steps:

S3.1, performing a differential operation on the array-focused sound source signal:

y'(t) = dy(t)/dt

wherein y'(t) represents the differentiated array-focused sound source signal, y(t) represents the array-focused sound source signal, and d(·)/dt denotes differentiation;

S3.2, performing a spectrum search on the differentiated array-focused sound source signal to obtain a plurality of sound source spectral features f_1, f_2, ..., f_Q, each satisfying:

|Y'(f_q)| > Γ

wherein f_q denotes the q-th sound source spectral feature, Y'(f) denotes the spectrum of the differentiated array-focused signal, and Γ represents the imaging threshold, which in this embodiment is the average of the spectral features of the plurality of sound sources.
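A sketch of the spectrum search of step S3, assuming discrete differentiation via np.diff and a mean-based imaging threshold as in the embodiment. The `ratio` knob is an addition for illustration, not part of the patent:

```python
import numpy as np

def spectral_features(y, fs, ratio=1.0):
    """Find sound-source spectral features of a focused signal y (step S3).

    Differentiates y (step S3.1), takes the magnitude spectrum of the
    result, and keeps the frequencies whose magnitude exceeds the
    imaging threshold, here taken as `ratio` times the spectrum mean.
    Returns the retained frequencies in Hz.
    """
    dy = np.diff(y)                              # discrete differentiation
    spec = np.abs(np.fft.rfft(dy))               # |Y'(f)|
    freqs = np.fft.rfftfreq(dy.size, d=1.0 / fs)
    thresh = ratio * spec.mean()                 # imaging threshold
    return freqs[spec > thresh]
```

For a signal dominated by a few tones, the retained frequencies cluster around those tones, which is the set used to band-pass filter each element's signal in step S4.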
Between step S5 and step S6, the method further includes the following step:

normalizing the signal point of each moment to obtain the normalized signal point corresponding to each moment, the normalization being calculated as:

P̃(t) = (X_1(t)/X_1(t), X_2(t)/X_1(t), ..., X_M(t)/X_1(t))

wherein X_1(t) represents the frequency-domain signal received by the 1st array element at moment t, and P̃(t) represents the normalized signal point corresponding to moment t;
in step S6, the normalized signal points corresponding to each moment are clustered based on a clustering algorithm to form K classes, i.e. K sound-emitting areas of the non-point sound source.
Step S6 includes the following steps:

S6.2, calculating the distance from each normalized signal point to each cluster centre, and assigning each normalized signal point to the class of its nearest cluster centre;

S6.3, recalculating the K cluster centres from all normalized signal points in each class;

S6.4, repeating steps S6.2 to S6.3 until a preset number of iterations is reached, finally obtaining K classes, i.e. K sound-emitting areas of the non-point sound source. The preset number of iterations can be set according to the actual situation; alternatively, iteration continues until the K cluster centres no longer change.
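Steps S6.2 to S6.4 describe ordinary k-means clustering. A minimal sketch follows; the initialisation is not specified in the text, so the first k points are used here (an assumption), and the early-stop criterion mirrors the "centres no longer change" option:

```python
import numpy as np

def kmeans(points, k, iters=100):
    """Minimal k-means following steps S6.2-S6.4.

    points: (n, M) normalized signal points; k: number of sound-emitting
    areas. Returns (labels, centers): a class index per point and the
    final (k, M) cluster centres.
    """
    centers = points[:k].astype(float).copy()  # assumed init: first k points
    for _ in range(iters):
        # S6.2: assign each point to the class of its nearest centre
        dist = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # S6.3: recompute each centre from the points of its class
        new = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        # S6.4: stop once the centres no longer change (or after `iters`)
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

Each resulting class collects the moments dominated by one sound-emitting area of the non-point source.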
In step S7, the slopes, in the different dimensions, of the fitted straight line formed in M-dimensional space for each sound-emitting area can be represented as k_{1,j}, k_{2,j}, ..., k_{M,j}, wherein k_{m,j} denotes the slope, in the m-th dimension, of the fitted straight line corresponding to the j-th sound-emitting area, i.e. the weight coefficient with which the m-th array element receives the signal of the j-th sound-emitting area.
Why a straight line fitted in M-dimensional space has M slopes is explained as follows:

Taking a two-dimensional plane as an example: for a straight line in the plane, viewed along each axis the line has two slopes, i.e. one slope each for the x dimension and the y dimension.

Taking three-dimensional space as an example: for a straight line in the space, seen in the front view, top view and side view, the line has three slopes, i.e. one slope each for the x, y and z dimensions.

A straight line fitted in M-dimensional space therefore has M slopes, one for each dimension.
In step S8, the sound source signal of each sound-emitting area at the final moment is calculated from:

x_m(T) = Σ_{j=1}^{K} k_{m,j} · s_j(T),  m = 1, 2, ..., M

wherein x_m(T) denotes the filtered signal received by the m-th array element at the final moment T, k_{m,j} denotes the weight coefficient with which the m-th array element receives the signal of the j-th sound-emitting area, and s_j(T) denotes the sound source signal of the j-th sound-emitting area at the final moment.

Written as a set of equations, the above formula can be expressed in matrix form as x(T) = K·s(T) and solved for the K region source signals s(T).

The weight coefficient with which the m-th array element receives the signal of the j-th sound-emitting area is therefore equal to the slope, in the m-th dimension, of the fitted straight line corresponding to the j-th sound-emitting area.
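The system of equations of step S8 can be solved in one least-squares call. The model below (each element's filtered signal as a weighted sum of the region source signals) is reconstructed from the surrounding prose; the names are assumptions:

```python
import numpy as np

def region_sources(x_final, W):
    """Recover the per-region source signals at the final moment (step S8).

    Model: x_m = sum_j W[m, j] * s_j, i.e. x_final = W @ s.
    x_final: (M,) filtered signals of the M elements at the final moment;
    W: (M, K) weight matrix whose entries are the fitted slopes k_{m,j}.
    Solved by least squares, which also covers the usual M > K case.
    """
    s, *_ = np.linalg.lstsq(W, x_final, rcond=None)
    return s
```

The spectrum of each recovered s_j then gives the regional spectral feature used for the secondary imaging of step S9.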
Step S9 includes the following steps:

S9.1, calculating the array flow pattern of each array element at each regional spectral feature, based on the regional spectral features of the sound-emitting areas of the non-point sound source, the positions of the array elements and the sound source position;

S9.2, performing secondary imaging based on the array flow patterns of the array elements at the regional spectral features to obtain a feature-enhanced sound source image.
In step S9.1, the array flow pattern of each array element at each regional spectral feature is calculated as:

a_m(f_j) = exp(-i · 2π · f_j · |p_0 - p_m| / c)

wherein f_j denotes the regional spectral feature of the j-th sound-emitting area, a_m(f_j) denotes the array flow pattern of the m-th array element at the spectral feature of the j-th sound-emitting area, p_0 denotes the sound source position, i denotes the imaginary unit, c denotes the speed of sound, and p_m denotes the coordinates of the m-th array element.
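A standard narrowband steering vector matches the quantities listed for the array flow pattern (regional spectral feature, source position, element coordinates, speed of sound, imaginary unit); its exact form here is an assumption, since the patent's formula is an image:

```python
import numpy as np

def flow_pattern(f_region, elem_pos, src_pos, c=343.0):
    """Array flow pattern at one regional spectral feature (step S9.1).

    f_region: regional spectral feature in Hz; elem_pos: (M, D) element
    coordinates; src_pos: (D,) sound source position. Returns the (M,)
    complex vector a_m = exp(-i * 2*pi * f_region * |p_0 - p_m| / c).
    """
    d = np.linalg.norm(elem_pos - src_pos, axis=1)  # source-to-element distance
    return np.exp(-2j * np.pi * f_region * d / c)
```

Each entry is a pure phase term (unit modulus), so the flow pattern only encodes the propagation delay from the source position to each element at that feature frequency.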
In step S9.2, the feature-enhanced sound source image is calculated as:

B = Σ_{j=1}^{K} | Σ_{m=1}^{M} a_m*(f_j) · X_m(f_j) |²

wherein B represents the feature-enhanced sound source image, f_j denotes the regional spectral feature of the j-th sound-emitting area, and X_m(f_j) denotes the component, at the regional spectral feature f_j, of the sound source signal received by the m-th array element.
The present embodiment can enhance a sound source image of a non-point sound source.
The above embodiments merely illustrate preferred embodiments of the present invention and do not limit its scope. Various modifications and improvements made by those skilled in the art to the technical solutions of the present invention, without departing from its design spirit, shall fall within the protection scope of the present invention.
Claims (10)
1. A feature enhancement method for a non-point sound source image, comprising the steps of:
s1, determining the sound source position of a non-point sound source;
s2, pointing the microphone array of the acoustic imaging instrument to the sound source position to obtain an array focusing sound source signal;
s3, carrying out frequency spectrum search on the array focusing sound source signals to obtain a plurality of sound source frequency spectrum characteristics;
s4, performing band-pass filtering processing on the sound source signals received by the array elements according to a plurality of sound source frequency spectrum characteristics to obtain filtering signals received by the array elements, and performing frequency domain processing on the filtering signals to obtain frequency domain signals;
S5, forming, from the frequency-domain signals received by all array elements at each moment, the time signal point corresponding to each moment in M-dimensional space; the time signal point corresponding to moment t has coordinates (X_1(t), X_2(t), ..., X_M(t)), where X_m(t) represents the frequency-domain signal received by the m-th array element at moment t, i.e. the coordinate of the m-th dimension in M-dimensional space; M represents the total number of microphone array elements, m = 1, 2, ..., M, and t = 1, 2, ..., T, where T represents the final moment;
S6, clustering the signal points of all moments based on a clustering algorithm to form K classes, i.e. K sound-emitting areas of the non-point sound source;
S7, fitting the time signal points in each sound-emitting area to obtain, for each sound-emitting area, a fitted straight line formed in M-dimensional space; the slopes of the fitted line in the different dimensions represent the weight coefficients with which each array element receives the signal of the sound-emitting area corresponding to that fitted line;
S8, calculating the sound source signal of each sound-emitting area at the final moment from the filtered signals received by the array elements at the final moment and the weight coefficients with which each array element receives the signal of each sound-emitting area, and obtaining the regional spectral feature of each sound-emitting area from its sound source signal;
and S9, performing secondary imaging based on the frequency spectrum characteristics of each region, the positions of the array elements and the sound source position of the non-point sound source to obtain a sound source image with enhanced characteristics.
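Step S4's band-pass filtering around each found spectral feature can be sketched with a simple FFT-mask filter (the sample rate, half-bandwidth `bw`, and test signal are illustrative assumptions, not from the patent):

```python
import numpy as np

fs = 8000                                  # assumed sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
# signal received by one array element: two tones plus background noise
x = (np.sin(2 * np.pi * 500 * t)
     + 0.5 * np.sin(2 * np.pi * 1200 * t)
     + 0.1 * np.random.default_rng(3).standard_normal(t.size))
features = [500.0, 1200.0]                 # sound source spectral features (Hz)
bw = 50.0                                  # assumed half-bandwidth per feature

X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(x.size, 1 / fs)
mask = np.zeros_like(freqs, dtype=bool)
for f in features:
    mask |= np.abs(freqs - f) <= bw        # keep a band around each feature
x_filt = np.fft.irfft(X * mask, n=x.size)  # filtered signal (time domain)
X_filt = np.fft.rfft(x_filt)               # its frequency domain signal (S4)
```

The mask keeps only bins near each spectral feature, so energy away from the sound source features is suppressed before the frequency-domain step.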
2. The feature enhancement method for a non-point sound source image according to claim 1, wherein step S3 comprises the following steps:
S3.1, carrying out a differential operation on the array focused sound source signal:
x'(t) = dx(t)/dt
wherein x'(t) represents the differentiated array focused sound source signal, x(t) represents the array focused sound source signal, and d/dt denotes differentiation;
S3.2, carrying out spectrum search on the differentiated array focused sound source signal to obtain a plurality of sound source spectral features f_1, f_2, ..., f_N, wherein each sound source spectral feature satisfies the condition of being a peak of the spectrum of the differentiated signal.
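Steps S3.1–S3.2 differentiate the focused signal and then search its spectrum; a sketch, assuming local maxima of the FFT magnitude above a threshold serve as the spectral features (the threshold and signal are my own choices):

```python
import numpy as np

fs = 8000                                   # assumed sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)

# S3.1: differential operation on the array focused sound source signal
x_diff = np.gradient(x, 1 / fs)             # numerical d x(t) / d t

# S3.2: spectrum search on the differentiated signal
spec = np.abs(np.fft.rfft(x_diff))
freqs = np.fft.rfftfreq(x_diff.size, 1 / fs)
# local maxima above a threshold serve as sound source spectral features
peaks = [i for i in range(1, spec.size - 1)
         if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]
         and spec[i] > 0.1 * spec.max()]
features = freqs[peaks]
print(features)
```

Differentiation scales each component by its angular frequency, which lifts the higher-frequency tone relative to the lower one before the peak search.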
3. The feature enhancement method for a non-point sound source image according to claim 1, further comprising, between step S5 and step S6, the following step:
normalizing the signal point of each moment to obtain a standardized signal point corresponding to each moment, the normalization formula being:
Z_m(t) = X_m(t) / X_1(t), m = 1, 2, ..., M
wherein X_1(t) represents the frequency domain signal received by the first array element at moment t, and (Z_1(t), Z_2(t), ..., Z_M(t)) represents the standardized signal point corresponding to moment t;
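Claim 3's normalization, read as division by the first array element's signal (my reading of the garbled formula), makes signal points from one sound-emitting area coincide regardless of source amplitude:

```python
import numpy as np

# two moments from the same sound-emitting area, differing only in amplitude
p1 = np.array([2.0, 1.0, 0.4])      # frequency-domain signals of 3 elements
p2 = np.array([6.0, 3.0, 1.2])      # same area, louder moment

normalize = lambda p: p / p[0]      # divide each coordinate by element 1
n1, n2 = normalize(p1), normalize(p2)
print(n1, n2)
```

After normalization both moments map to the same standardized point, which is what lets the subsequent clustering group moments by area rather than by loudness.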
4. The feature enhancement method for a non-point sound source image according to claim 3, wherein step S6 comprises the following steps:
S6.2, calculating the distance between each standardized signal point and each cluster center, and assigning each standardized signal point to the class of the cluster center nearest to it;
S6.3, recalculating the K cluster centers after the iteration based on all the standardized signal points in each class;
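Steps S6.2–S6.3 are the assignment and update steps of k-means; a minimal sketch on synthetic standardized points (the data and initial centers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic standardized signal points around two area directions
pts_a = np.array([1.0, 0.5, 0.2]) + 0.05 * rng.standard_normal((50, 3))
pts_b = np.array([1.0, 2.0, 3.0]) + 0.05 * rng.standard_normal((50, 3))
points = np.vstack([pts_a, pts_b])

K = 2
centers = points[[0, -1]].copy()          # one initial center from each end
for _ in range(10):
    # S6.2: assign each standardized point to its nearest cluster center
    dist = np.linalg.norm(points[:, None] - centers[None], axis=2)
    labels = dist.argmin(axis=1)
    # S6.3: recompute the K cluster centers from the points in each class
    centers = np.array([points[labels == k].mean(axis=0) for k in range(K)])
```

Each class of points then corresponds to one sound-emitting area of the non-point source.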
5. The feature enhancement method for a non-point sound source image according to claim 4, wherein in step S7, the fitted straight line formed in the M-dimensional space for each sound-emitting area is represented by its slopes in the different dimensions k_{i,1}, k_{i,2}, ..., k_{i,M}, wherein k_{i,m} denotes the slope, in the m-th dimension, of the fitted straight line corresponding to the i-th sound-emitting area, i.e. the weight coefficient of the signal received by the m-th array element in the i-th sound-emitting area.
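Claim 5 reads the per-dimension slopes of the fitted line as the array-element weight coefficients; a sketch, using the dominant SVD direction as the line through the origin (the fitting method is my assumption, not stated in the patent):

```python
import numpy as np

rng = np.random.default_rng(2)
weights = np.array([1.0, 0.6, 0.3, 0.1])    # per-element weights of one area
amps = rng.uniform(0.5, 2.0, 100)           # varying source amplitude
points = amps[:, None] * weights + 0.01 * rng.standard_normal((100, 4))

# fit a straight line through the origin: dominant right singular vector
_, _, vt = np.linalg.svd(points, full_matrices=False)
slopes = vt[0] / vt[0][0]                   # scale so element 1 has slope 1
```

The recovered slopes reproduce the relative weights of the array elements for that sound-emitting area, independent of how loud each moment was.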
7. The feature enhancement method for a non-point sound source image according to claim 1, wherein in step S8, the sound source signal of each sound-emitting area at the final moment is calculated according to the following formula:
y_i(T) = sum over m = 1, ..., M of k_{i,m} · x_m(T)
wherein x_m(T) represents the filtered signal received by the m-th array element at the final moment, k_{i,m} represents the weight coefficient of the i-th sound-emitting area for the signal received by the m-th array element, and y_i(T) represents the sound source signal of the i-th sound-emitting area at the final moment.
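Reading claim 7 as a weighted sum of the filtered element signals (an assumption, since the original formula was an image), the computation is one matrix-vector product:

```python
import numpy as np

x_final = np.array([0.8, 0.5, 0.2, 0.1])  # filtered signals at the final moment
W = np.array([[1.0, 0.6, 0.3, 0.1],       # weight of element m in area i
              [0.1, 0.2, 0.7, 1.0]])

# y_i = sum_m W[i, m] * x_final[m]
y = W @ x_final
print(y)                                  # one sound source signal per area
```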
8. The feature enhancement method for a non-point sound source image according to claim 1, wherein step S9 comprises the following steps:
S9.1, calculating the array flow pattern of each array element at each regional spectral feature based on the regional spectral features of the sound-emitting areas of the non-point sound source, the positions of the array elements and the sound source position;
S9.2, performing secondary imaging based on the array flow pattern of each array element at each regional spectral feature to obtain a feature-enhanced sound source image.
9. The feature enhancement method for a non-point sound source image according to claim 8, wherein in step S9.1, the array flow pattern of each array element at each regional spectral feature is calculated according to the following formula:
a_m(f_i) = exp(-j · 2π · f_i · ||r_0 - r_m|| / c)
wherein f_i represents the regional spectral feature of the i-th sound-emitting area, a_m(f_i) represents the array flow pattern of the m-th array element at the spectral feature of the i-th sound-emitting area, r_0 represents the sound source position, j represents the imaginary unit, c represents the speed of sound, and r_m represents the coordinates of the m-th array element.
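The array flow pattern of claim 9 is a steering phase term exp(-j·2π·f·d/c); a sketch computing it for each element and each regional spectral feature (the geometry values are illustrative):

```python
import numpy as np

c = 343.0                                        # speed of sound (m/s)
source = np.array([0.0, 0.0, 1.0])               # sound source position (m)
elements = np.array([[-0.1, 0.0, 0.0],           # microphone coordinates (m)
                     [ 0.0, 0.0, 0.0],
                     [ 0.1, 0.0, 0.0]])
features = np.array([500.0, 1200.0])             # regional spectral features (Hz)

dist = np.linalg.norm(source - elements, axis=1)             # |r_0 - r_m|
# a[m, i] = exp(-j * 2*pi * f_i * |r_0 - r_m| / c)
a = np.exp(-1j * 2 * np.pi * features[None, :] * dist[:, None] / c)
```

Each entry is a unit-magnitude phase factor encoding the propagation delay from the source position to an element at one spectral feature.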
10. The feature enhancement method for a non-point sound source image according to claim 9, wherein in step S9.2, the feature-enhanced sound source image is calculated from the regional spectral features of the sound-emitting areas and the array flow patterns a_m(f_i).
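The imaging formula of claim 10 does not survive in the extracted text; a conventional delay-and-sum map built from claim-9-style steering terms is one plausible reading (the grid, geometry, and signals here are illustrative assumptions, not the patented formula):

```python
import numpy as np

c = 343.0                                        # speed of sound (m/s)
grid = np.linspace(-0.5, 0.5, 41)                # scan line of candidate x-positions
elements = np.array([-0.15, -0.05, 0.05, 0.15])  # linear array x-coordinates (m)
depth = 1.0                                      # source plane distance (m)
features = np.array([500.0, 1200.0])             # regional spectral features (Hz)
area_signals = np.array([1.0, 0.6])              # regional source amplitudes
src_x = 0.1                                      # simulated true source x (m)

def steer(x, f):
    d = np.sqrt((x - elements) ** 2 + depth ** 2)
    return np.exp(-1j * 2 * np.pi * f * d / c)   # claim-9 style flow pattern

image = np.zeros(grid.size)
for f, s in zip(features, area_signals):
    meas = s * steer(src_x, f)                   # simulated element signals
    for gi, gx in enumerate(grid):
        w = steer(gx, f)
        image[gi] += np.abs(np.vdot(w, meas)) ** 2 / elements.size ** 2

peak = grid[image.argmax()]                      # imaged source position
```

Summing the per-feature maps concentrates energy at the true position, which is the sense in which the secondary imaging enhances the source features.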
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211524057.2A CN115575896B (en) | 2022-12-01 | 2022-12-01 | Feature enhancement method for non-point sound source image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115575896A CN115575896A (en) | 2023-01-06 |
CN115575896B true CN115575896B (en) | 2023-03-10 |
Family
ID=84590473
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211524057.2A Active CN115575896B (en) | 2022-12-01 | 2022-12-01 | Feature enhancement method for non-point sound source image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115575896B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014215385A (en) * | 2013-04-24 | 2014-11-17 | 日本電信電話株式会社 | Model estimation system, sound source separation system, model estimation method, sound source separation method, and program |
WO2018045973A1 (en) * | 2016-09-08 | 2018-03-15 | 南京阿凡达机器人科技有限公司 | Sound source localization method for robot, and system |
KR20200038688A (en) * | 2018-10-04 | 2020-04-14 | 서희 | Apparatus and method for providing sound source |
CN113884986A (en) * | 2021-12-03 | 2022-01-04 | 杭州兆华电子有限公司 | Beam focusing enhanced strong impact signal space-time domain joint detection method and system |
CN114175144A (en) * | 2019-07-30 | 2022-03-11 | 杜比实验室特许公司 | Data enhancement for each generation of training acoustic models |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6635903B2 (en) * | 2016-10-14 | 2020-01-29 | 日本電信電話株式会社 | Sound source position estimating apparatus, sound source position estimating method, and program |
CN107680593A (en) * | 2017-10-13 | 2018-02-09 | 歌尔股份有限公司 | The sound enhancement method and device of a kind of smart machine |
Non-Patent Citations (2)
Title |
---|
Enhanced Power-Normalized Features for Mandarin Robust Speech Recognition Based on a Voiced-Unvoiced-Silence Decision; Ying-Wei Tan et al.; 2014 IEEE China Summit & International Conference on Signal and Information Processing (ChinaSIP); 2014-06-13; pp. 222-225 *
Speech Feature Enhancement Based on Frequency-Domain ICA; Lü Zhao et al.; Journal of Vibration and Shock; 2011-02-25; Vol. 30, No. 2; pp. 238-257 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2746737B1 (en) | Acoustic sensor apparatus and acoustic camera using a mems microphone array | |
JP5701164B2 (en) | Position detection apparatus and position detection method | |
Ginn et al. | Noise source identification techniques: simple to advanced applications | |
CN110491403A (en) | Processing method, device, medium and the speech enabled equipment of audio signal | |
CA2496785A1 (en) | Sound source search system | |
CN113868583B (en) | Method and system for calculating sound source distance focused by subarray wave beams | |
WO2009145310A1 (en) | Sound source separation and display method, and system thereof | |
JPH09512676A (en) | Adaptive beamforming method and apparatus | |
CN110444220B (en) | Multi-mode remote voice perception method and device | |
CN109683134A (en) | A kind of high-resolution localization method towards rotation sound source | |
CN115435891A (en) | Road vehicle sound power monitoring system based on vector microphone | |
Jing et al. | Sound source localisation using a single acoustic vector sensor and multichannel microphone phased arrays | |
CN115575896B (en) | Feature enhancement method for non-point sound source image | |
Prezelj et al. | A novel approach to localization of environmental noise sources: Sub-windowing for time domain beamforming | |
CN114355290A (en) | Sound source three-dimensional imaging method and system based on stereo array | |
CN114001816A (en) | Acoustic imager audio acquisition system based on MPSOC | |
CN109061558A (en) | A kind of sound collision detection and sound localization method based on deep learning | |
CN109254265A (en) | A kind of whistle vehicle positioning method based on microphone array | |
CN116559778B (en) | Vehicle whistle positioning method and system based on deep learning | |
Chen et al. | Insight into split beam cross-correlator detector with the prewhitening technique | |
CN115061089B (en) | Sound source positioning method, system, medium, equipment and device | |
CN112857560B (en) | Acoustic imaging method based on sound frequency | |
CN116309921A (en) | Delay summation acoustic imaging parallel acceleration method based on CUDA technology | |
Kerstens et al. | An optimized planar MIMO array approach to in-air synthetic aperture sonar | |
Bianchi et al. | High resolution imaging of acoustic reflections with spherical microphone arrays |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||