CN115575896A - Feature enhancement method for non-point sound source image - Google Patents
- Publication number
- CN115575896A (application CN202211524057.2A)
- Authority
- CN
- China
- Prior art keywords
- sound source
- sound
- array
- point
- signal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S5/00—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
- G01S5/18—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
- G01S5/20—Position of source determined by a plurality of spaced direction-finders
Abstract
The invention relates to a feature enhancement method for a non-point sound source image, which comprises the following steps: S1, determining the sound source position of a non-point sound source; S2, obtaining an array focused sound source signal; S3, obtaining a plurality of sound source spectrum features; S4, obtaining the frequency domain signal received by each array element; S5, forming, in M-dimensional space, the time signal point corresponding to each time; S6, clustering the signal points at all times to form K classes, i.e. the K sound-emitting areas of the non-point sound source; S7, obtaining the fitted straight line formed by each sound-emitting area in the M-dimensional space; S8, calculating the regional spectrum features of each sound-emitting area based on the filtered signals received by the array elements at the final time and the weight coefficients with which the array elements receive the signal of each sound-emitting area; and S9, performing secondary imaging based on the regional spectrum features, the positions of the array elements and the sound source position of the non-point sound source to obtain a feature-enhanced sound source image. The invention can enhance the features of the sound source image of a non-point sound source, thereby improving the imaging effect of an acoustic imaging instrument on a weak sound source.
Description
Technical Field
The invention belongs to the technical field of sound source positioning, and particularly relates to a feature enhancement method for a non-point sound source image.
Background
Acoustic imaging is based on microphone array measurement technology: by measuring the phase differences with which the sound waves in a certain space reach the individual microphones, the position of a sound source is determined according to the phased-array principle, the amplitude of the sound source is measured, and the distribution of the sound source in space is displayed as an image; that is, a cloud map of the spatial sound field distribution (the sound image) is obtained, in which intensity is represented by the color and brightness of the image.
For example, chinese patent publication No. CN110082725A discloses a sound source localization delay estimation method and a sound source localization system based on a microphone array, which utilize a newly proposed frequency domain weighting function to synthesize two improved frequency domain weighting functions of PATH and ML, and make up for the deficiency that the original algorithm cannot resist noise and reverberation at the same time. Firstly, a microphone array receives two paths of signals, the two paths of signals are converted into digital signals through ADC sampling, windowing and framing are carried out on the two paths of signals, then, frequency domain signals are obtained through Fourier transform, cross power spectrums and weighting functions of the two frames of signals are calculated, the cross power spectrums are weighted, then, cross correlation functions of the two paths of signals are obtained through Fourier inverse transformation on the weighted cross power spectrums, and finally, peak detection is carried out on the cross correlation functions, so that relative time delay of the two paths of signals can be obtained. The method reduces the influence of the environmental noise and reverberation on the time delay estimation, improves the accuracy of the time delay estimation and improves the sound source positioning precision.
For another example, chinese patent publication No. CN113126028A discloses a noise source positioning method based on multiple microphone arrays. M microphone sensors are selected to construct an annular microphone array, one microphone sensor is arranged to serve as a reference microphone sensor, an array coordinate system is established by the reference microphone sensor, the other M-1 microphone sensors are arranged around the reference microphone sensor, and D sound sources are arranged in a cabin; obtaining relative transfer functions from D sound sources to each microphone sensor, and constructing an array flow pattern matrix of the annular microphone array; further introducing the linear distance between the sound source and the reference microphone sensor, the azimuth angle of the sound source relative to the reference microphone sensor and the sound source frequency to construct an array flow pattern near-field model; estimating the azimuth angle of each sound source relative to the reference microphone sensor by adopting a MUSIC algorithm; more than two identical annular microphone arrays are preset in the cabin, the azimuth angle of the sound source relative to each annular microphone array relative to the reference microphone sensor is estimated, the distance from the sound source to each annular microphone array is solved by using a least square method overall, and then the sound source position information is obtained.
Therefore, research on sound source localization is currently mature, but research on imaging the sound image is comparatively scarce. When an acoustic imaging instrument images a weak sound source, the weak source signal leads to a poor final imaging result, so the image ultimately displayed to the user is of poor quality. A method for feature enhancement of the sound source image is therefore needed.
Disclosure of Invention
In view of the above problems in the prior art, the present invention provides a method for enhancing characteristics of a sound source image of a non-point sound source, which can enhance characteristics of the sound source image of the non-point sound source, thereby improving an imaging effect of a sound imaging apparatus on a weak sound source. The invention adopts the following technical scheme:
a feature enhancement method for a non-point sound source image comprises the following steps:
s1, determining the sound source position of a non-point sound source;
s2, pointing the microphone array of the acoustic imaging instrument to the sound source position to obtain an array focusing sound source signal;
s3, carrying out frequency spectrum search on the array focusing sound source signals to obtain a plurality of sound source frequency spectrum characteristics;
s4, performing band-pass filtering processing on the sound source signals received by the array elements according to a plurality of sound source frequency spectrum characteristics to obtain filtering signals received by the array elements, and performing frequency domain processing on the filtering signals to obtain frequency domain signals;
S5, forming, from the frequency domain signals received by all array elements at each time, the time signal point corresponding to each time in M-dimensional space, where the time signal point p(t) corresponding to time t has the coordinates (X_1(t), X_2(t), …, X_M(t)), X_m(t) represents the frequency domain signal received by the m-th array element at time t, i.e. the coordinate of the m-th dimension in the M-dimensional space, M represents the total number of microphone array elements, m = 1, 2, …, M, and t = 1, 2, …, T, where T represents the final time;
S6, clustering the signal points at each time based on a clustering algorithm to form K classes, i.e. the K sound-emitting areas of the non-point sound source;
S7, fitting the time signal points in each sound-emitting area to obtain the fitted straight line formed by each sound-emitting area in the M-dimensional space, where the slopes of the fitted line in the different dimensions represent the weight coefficients with which each array element receives the signal of the sound-emitting area corresponding to that fitted line;
s8, calculating to obtain sound source signals of all sounding areas at the final moment based on the filtering signals received by all array elements at the final moment and the weight coefficients of the signals received by all array elements at all sounding areas, and obtaining the area spectrum characteristics of all sounding areas based on the sound source signals of all sounding areas;
and S9, carrying out secondary imaging based on the frequency spectrum characteristics of each region, the positions of the array elements and the sound source position of the non-point sound source to obtain a sound source image with enhanced characteristics, and carrying out sound source positioning.
Preferably, step S3 includes the following steps:
S3.1, differentiating the array focused sound source signal:
y′(t) = d y(t) / d t
wherein y′(t) represents the differentiated array focused sound source signal, y(t) represents the array focused sound source signal, and d(·)/dt denotes differentiation;
S3.2, performing spectrum search on the differentiated array focused sound source signal to obtain a plurality of sound source spectrum features f_i, where each sound source spectrum feature satisfies the following condition:
|Y′(f_i)| > δ
wherein Y′(f) is the spectrum of the differentiated signal y′(t), f_i denotes the i-th sound source spectrum feature, and δ represents the imaging threshold.
Preferably, the method further comprises the following steps between step S5 and step S6:
normalizing the signal point at each time to obtain the normalized signal point corresponding to each time, the normalization formula being:
p̃(t) = p(t) / X_1(t)
wherein X_1(t) represents the frequency domain signal received by the 1st array element at time t, and p̃(t) represents the normalized signal point corresponding to time t;
in step S6, the normalized signal points corresponding to each time are clustered based on a clustering algorithm to form K classes, i.e. the K sound-emitting areas of the non-point sound source.
Preferably, step S6 includes the following steps:
S6.1, selecting K normalized signal points as the initial cluster centers;
S6.2, calculating the distance between each normalized signal point and each cluster center, and assigning each normalized signal point to the class of the cluster center closest to it;
S6.3, recalculating the K iterated cluster centers based on all the normalized signal points in each class;
S6.4, repeating steps S6.2 to S6.3 until a preset number of iterations is reached, to obtain K classes, i.e. the K sound-emitting areas of the non-point sound source.
Preferably, in step S7, the slopes, in the different dimensions, of the fitted straight line formed by each sound-emitting area in the M-dimensional space can be represented as (a_{k,1}, a_{k,2}, …, a_{k,M}), where a_{k,m} denotes the slope, in the m-th dimension, of the fitted straight line corresponding to the k-th sound-emitting area, i.e. it represents the weight coefficient with which the m-th array element receives the signal of the k-th sound-emitting area.
Preferably, in step S7, the time signal points in each sound-emitting area are fitted using the least squares method to obtain the fitted straight line formed by each sound-emitting area in the M-dimensional space.
Preferably, in step S8, the sound source signal of each sound-emitting area at the final time is calculated based on the following formula:
x̃_m(T) = Σ_{k=1}^{K} a_{k,m} · s_k(T), m = 1, 2, …, M
wherein x̃_m(T) represents the filtered signal received by the m-th array element at the final time T, a_{k,m} represents the weight coefficient with which the m-th array element receives the signal of the k-th sound-emitting area, and s_k(T) represents the sound source signal of the k-th sound-emitting area at the final time.
Preferably, step S9 includes the following steps:
s9.1, calculating an array flow pattern of each array element at each area spectrum characteristic position based on the area spectrum characteristics of each sound production area of the non-point sound source, the position of each array element and the sound source position;
and S9.2, performing secondary imaging based on the array flow pattern of each array element at each region spectrum feature to obtain a feature-enhanced sound source image.
Preferably, in step S9.1, the calculation formula of the array flow pattern of each array element at each regional spectrum feature is:
v_{m,k} = e^(−j2πf_k‖r_0 − r_m‖/c)
wherein f_k represents the regional spectrum feature of the k-th sound-emitting area, v_{m,k} represents the array flow pattern of the m-th array element at the spectrum feature of the k-th sound-emitting area, r_0 represents the sound source position, j represents the imaginary unit, c represents the speed of sound, and r_m represents the coordinates of the m-th array element.
Preferably, in step S9.2, the calculation formula of the feature-enhanced sound source image is:
B = Σ_{k=1}^{K} | Σ_{m=1}^{M} v_{m,k}* · X_m(f_k) |
wherein B represents the feature-enhanced sound source image, v_{m,k}* is the conjugate of the array flow pattern, f_k represents the regional spectrum feature of the k-th sound-emitting area, and X_m(f_k) represents the component, at the regional spectrum feature f_k, of the sound source signal received by the m-th array element.
The beneficial effects of the invention are:
the characteristic enhancement can be carried out on the sound source image of the non-point sound source, and the imaging effect of the sound imaging instrument on the sound source is further improved.
Because the mixing frequencies of the sounding components at different positions of the non-point sound source can be different, and a dominant sound source exists at different moments, the signal correlation of the sound source frequency needs to be searched, and the imaging enhancement is respectively performed on different positions of the non-point sound source to achieve the effect of image enhancement.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a feature enhancement method for a non-point sound source image according to the present invention.
Detailed Description
The following description is provided for illustrative purposes and is not intended to limit the invention to the particular embodiments disclosed. The invention is capable of other and different embodiments and of being practiced or of being carried out in various ways, and its several details are capable of modification in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the features in the following embodiments and examples may be combined with each other without conflict.
Referring to fig. 1, the present embodiment provides a feature enhancement method for a non-point sound source image, including the steps of:
s1, determining the sound source position of a non-point sound source;
s2, pointing the microphone array of the acoustic imaging instrument to the sound source position to obtain an array focusing sound source signal;
s3, carrying out frequency spectrum search on the array focusing sound source signals to obtain a plurality of sound source frequency spectrum characteristics;
s4, performing band-pass filtering processing on the sound source signals received by the array elements according to a plurality of sound source frequency spectrum characteristics to obtain filtering signals received by the array elements, and performing frequency domain processing on the filtering signals to obtain frequency domain signals;
S5, forming, from the frequency domain signals received by all array elements at each time, the time signal point corresponding to each time in M-dimensional space, where the time signal point p(t) corresponding to time t has the coordinates (X_1(t), X_2(t), …, X_M(t)), X_m(t) represents the frequency domain signal received by the m-th array element at time t, i.e. the coordinate of the m-th dimension in the M-dimensional space, M represents the total number of microphone array elements, m = 1, 2, …, M, and t = 1, 2, …, T, where T represents the final time;
it should be noted that, the above-mentioned dimensions may refer to a two-dimensional coordinate system and a three-dimensional coordinate system, where the coordinates of the two-dimensional coordinate system are (x, y), and the coordinates of the three-dimensional coordinate system are (x, y, z), where x is the coordinates representing the first dimension in the two-dimensional coordinate system and the three-dimensional coordinate system, y is the coordinates representing the second dimension in the two-dimensional coordinate system and the three-dimensional coordinate system, and z is the coordinates representing the third dimension in the three-dimensional coordinate system.
S6, clustering the signal points at each time based on a clustering algorithm to form K classes, i.e. the K sound-emitting areas of the non-point sound source;
S7, fitting the time signal points in each sound-emitting area to obtain the fitted straight line formed by each sound-emitting area in the M-dimensional space, where the slopes of the fitted line in the different dimensions represent the weight coefficients with which each array element receives the signal of the sound-emitting area corresponding to that fitted line; in this embodiment, the least squares method is used for fitting;
s8, calculating to obtain sound source signals of all sounding areas at the final moment based on the filtering signals received by all array elements at the final moment and the weight coefficients of the signals received by all array elements at all sounding areas, and obtaining the area spectrum characteristics of all sounding areas based on the sound source signals of all sounding areas;
and S9, performing secondary imaging based on the frequency spectrum characteristics of each region, the positions of the array elements and the sound source position of the non-point sound source to obtain a sound source image with enhanced characteristics.
It should be noted that although a point sound source exists in the ideal case and is relatively rare in real life, a sound source with a small sound-emitting area can be approximated as a point source; the non-point sound source in this embodiment denotes a sound source with a large sound-emitting area (the sound-emitting area can also be regarded as a diaphragm).
Because the mixing frequencies of the sound-emitting components at different positions of a non-point sound source can differ, and a different dominant source can exist at different times, the signal correlation of the sound source frequencies needs to be searched, and imaging enhancement performed separately on the different positions of the non-point sound source, to achieve the image enhancement effect.
Therefore, the method can perform characteristic enhancement on the sound source image of the non-point sound source, and further improve the imaging effect of the sound imaging instrument on the weak sound source.
Specifically, the method comprises the following steps:
in step S1, acoustic imagingThe output result of the instrument is a two-dimensional image, and the physical meaning of the two-dimensional image is that the stronger the energy of the sound source is, the brighter the position of the sound source in the image is, the horizontal scanning angle is the abscissa in the image, and the vertical scanning angle is the ordinate, so that the position of the sound source can be determined through energy peak value search, and the position of the sound source is recorded as。
In step S2, the microphone array of the acoustic imaging instrument has M array elements. The signal received by the m-th array element is recorded as x_m(t), and the frequency domain signal of each array element can be obtained using the fast Fourier transform:
X_m(f) = FFT(x_m(t))
wherein f represents frequency and FFT(·) represents the fast Fourier transform operation.
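As an illustration only (the patent prescribes no particular implementation), the per-element transform above can be sketched in Python with NumPy; all names and signal parameters here are hypothetical:

```python
import numpy as np

def element_spectra(x, fs):
    """One-sided FFT of each array element's time signal.
    x: (M, N) array, row m holding x_m(t); returns the frequency
    axis f and the (M, N//2+1) spectra X_m(f)."""
    M, N = x.shape
    X = np.fft.rfft(x, axis=1)           # X_m(f) = FFT{x_m(t)}
    f = np.fft.rfftfreq(N, d=1.0 / fs)   # frequency bins in Hz
    return f, X

# Example: M = 4 elements receiving a 1 kHz tone, sampled at 16 kHz
fs, N = 16000, 1024
t = np.arange(N) / fs
x = np.vstack([np.sin(2 * np.pi * 1000 * t + 0.1 * m) for m in range(4)])
f, X = element_spectra(x, fs)
peak_hz = f[np.argmax(np.abs(X[0]))]     # dominant frequency of element 1
```

The dominant bin of each row then recovers the tone frequency to within one bin width (fs/N).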
The sound source position r_0 is known from the calculation in step S1; the microphone array of the acoustic imaging instrument is therefore pointed at the sound source position to obtain the array focused sound source signal:
Y(f) = Σ_{m=1}^{M} X_m(f) · e^(j2πf‖r_0 − r_m‖/c)
wherein j represents the imaginary unit, r_m represents the coordinates of the m-th array element, and c represents the speed of sound; the time-domain focused signal y(t) is obtained from Y(f) by inverse Fourier transform.
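A minimal sketch of delay-and-sum focusing under the reconstruction Y(f) = Σ_m X_m(f)·e^(j2πf‖r_0 − r_m‖/c); the array geometry and function names are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

c = 343.0  # speed of sound in air, m/s (assumed)

def focus(X, f, r0, r_m):
    """Focus the element spectra at source position r0.
    X: (M, F) spectra X_m(f), f: (F,) frequencies,
    r0: (3,) source position, r_m: (M, 3) element coordinates."""
    d = np.linalg.norm(r0 - r_m, axis=1)                  # path lengths
    phase = np.exp(1j * 2 * np.pi * np.outer(d, f) / c)   # delay compensation
    return (X * phase).sum(axis=0)                        # focused Y(f)

# Synthetic spectra carrying exactly the propagation phase, so focusing
# aligns all elements and the coherent sum equals M = 3
r_m = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.2, 0.0, 0.0]])
r0 = np.array([0.0, 0.0, 1.0])
f = np.array([1000.0, 2000.0])
d = np.linalg.norm(r0 - r_m, axis=1)
X = np.exp(-1j * 2 * np.pi * np.outer(d, f) / c)
Y = focus(X, f, r0, r_m)
```

With perfectly matched phases the focused spectrum equals the element count at every frequency, which is the gain that focusing provides over a single microphone.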
Step S3 includes the following steps:
S3.1, differentiating the array focused sound source signal:
y′(t) = d y(t) / d t
wherein y′(t) represents the differentiated array focused sound source signal, y(t) represents the array focused sound source signal, and d(·)/dt denotes differentiation;
S3.2, performing spectrum search on the differentiated array focused sound source signal to obtain a plurality of sound source spectrum features f_i, where each sound source spectrum feature satisfies the following condition:
|Y′(f_i)| > δ
wherein Y′(f) is the spectrum of y′(t), f_i denotes the i-th sound source spectrum feature, and δ represents the imaging threshold; in this embodiment, the imaging threshold is the average of the spectral magnitudes.
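One plausible reading of S3.1 and S3.2, using a discrete difference for the differentiation and the mean spectral magnitude as the imaging threshold δ; the test signal and names are hypothetical:

```python
import numpy as np

def spectral_features(y, fs):
    """Sketch of step S3: differentiate the focused signal y(t),
    take the spectrum of y'(t), and keep every frequency whose
    magnitude exceeds the imaging threshold (here the mean magnitude)."""
    y_d = np.diff(y)                          # y'(t): discrete differentiation
    Yd = np.abs(np.fft.rfft(y_d))             # |Y'(f)|
    f = np.fft.rfftfreq(len(y_d), d=1.0 / fs)
    delta = Yd.mean()                         # imaging threshold delta
    return f[Yd > delta]                      # candidate spectrum features f_i

# Example: a focused signal mixing 500 Hz and 1500 Hz components
fs = 16000
t = np.arange(1024) / fs
y = np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 1500 * t)
feats = spectral_features(y, fs)
```

Both tone frequencies survive the threshold, along with leakage bins around them; a real implementation would likely add peak picking on top of the threshold.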
The following step is further included between step S5 and step S6:
normalizing the signal point at each time to obtain the normalized signal point corresponding to each time, the normalization formula being:
p̃(t) = p(t) / X_1(t)
wherein X_1(t) represents the frequency domain signal received by the 1st array element at time t, and p̃(t) represents the normalized signal point corresponding to time t;
in step S6, the normalized signal points corresponding to each time are clustered based on a clustering algorithm to form K classes, i.e. the K sound-emitting areas of the non-point sound source.
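The normalization amounts to dividing every coordinate of the signal point by its first coordinate, so every normalized point has 1 in its first dimension; a one-line sketch (real values are used for simplicity, although frequency-domain coordinates would be complex):

```python
import numpy as np

def normalize_point(p):
    """Divide every coordinate by the first one, so the normalized
    signal point always equals 1 in its first dimension."""
    return p / p[0]

p = np.array([2.0, 4.0, 6.0])   # hypothetical signal point p(t), M = 3
q = normalize_point(p)
```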
Step S6 includes the following steps:
S6.1, selecting K normalized signal points as the initial cluster centers;
S6.2, calculating the distance between each normalized signal point and each cluster center, and assigning each normalized signal point to the class of the cluster center closest to it;
S6.3, recalculating the K iterated cluster centers based on all the normalized signal points in each class;
S6.4, repeating steps S6.2 to S6.3 until a preset number of iterations is reached, to obtain K classes, i.e. the K sound-emitting areas of the non-point sound source. The preset number of iterations can be set according to the actual situation, iterating until the K cluster centers no longer change.
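The steps above describe a standard k-means iteration; a self-contained sketch (the blob data, seed and iteration count are purely illustrative):

```python
import numpy as np

def kmeans(points, K, iters=20, seed=0):
    """Minimal k-means: pick K initial centers (S6.1), assign each point
    to its nearest center (S6.2), recompute the centers (S6.3), and
    repeat for a preset number of iterations (S6.4)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), K, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(K):
            if np.any(labels == k):           # guard against empty clusters
                centers[k] = points[labels == k].mean(axis=0)
    return labels, centers

# Two well-separated blobs stand in for two sound-emitting areas
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.1, (30, 2)), rng.normal(5, 0.1, (30, 2))])
labels, centers = kmeans(pts, 2)
```

On well-separated data the partition recovers the two generating blobs regardless of which points seed the centers.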
In step S7, the slopes, in the different dimensions, of the fitted straight line formed by each sound-emitting area in the M-dimensional space can be represented as (a_{k,1}, a_{k,2}, …, a_{k,M}), where a_{k,m} denotes the slope, in the m-th dimension, of the fitted straight line corresponding to the k-th sound-emitting area, i.e. it represents the weight coefficient with which the m-th array element receives the signal of the k-th sound-emitting area.
Why the fitted straight line formed in the M-dimensional space has M slopes is explained as follows:
Taking a two-dimensional plane as an example: a straight line in the plane has one slope with respect to each of the x-dimension and the y-dimension, i.e. two slopes in total.
Taking three-dimensional space as an example: for a straight line in the space, the x-dimension, the y-dimension and the z-dimension each contribute one slope, i.e. three slopes in total.
By analogy, the fitted straight line formed in the M-dimensional space has one slope in each of its dimensions, i.e. M slopes in total.
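One way to realize the least-squares fit of step S7 is to regress each dimension of the cluster's points on the first dimension, through the origin, so that the resulting slopes play the role of the weight coefficients a_{k,m}. This is a sketch of that reading, not necessarily the patent's exact fitting scheme:

```python
import numpy as np

def line_slopes(P):
    """Least-squares slopes of a line through the origin fitted to the
    points of one cluster. P: (T, M) matrix, row t = signal point at
    time t. Dimension m is regressed on dimension 1, giving
    a_m = sum_t X_m(t) X_1(t) / sum_t X_1(t)^2."""
    x1 = P[:, 0]
    return P.T @ x1 / (x1 @ x1)

# Points generated exactly on the line p(t) = s(t) * (1, 2, 0.5):
# the fitted slopes recover the per-element weights
s = np.linspace(1.0, 3.0, 50)
P = np.outer(s, np.array([1.0, 2.0, 0.5]))
a = line_slopes(P)
```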
In step S8, the sound source signal of each sound-emitting area at the final time is calculated based on the following formula:
x̃_m(T) = Σ_{k=1}^{K} a_{k,m} · s_k(T), m = 1, 2, …, M
wherein x̃_m(T) represents the filtered signal received by the m-th array element at the final time T, a_{k,m} represents the weight coefficient with which the m-th array element receives the signal of the k-th sound-emitting area, and s_k(T) represents the sound source signal of the k-th sound-emitting area at the final time.
Written out for m = 1, 2, …, M, the above formula forms a system of M equations in the K unknowns s_1(T), …, s_K(T), from which the sound source signals of the sound-emitting areas are solved.
Thus the weight coefficient with which the m-th array element receives the signal of the k-th sound-emitting area is equal to the slope, in the m-th dimension, of the fitted straight line corresponding to the k-th sound-emitting area.
In step S9, the method includes the following steps:
s9.1, calculating an array flow pattern of each array element at each area spectrum characteristic position based on the area spectrum characteristics of each sound production area of the non-point sound source, the position of each array element and the sound source position;
and S9.2, performing secondary imaging based on the array flow pattern of each array element at each region spectrum feature to obtain a feature-enhanced sound source image.
In step S9.1, the calculation formula of the array flow pattern of each array element at each regional spectrum feature is:
v_{m,k} = e^(−j2πf_k‖r_0 − r_m‖/c)
wherein f_k represents the regional spectrum feature of the k-th sound-emitting area, v_{m,k} represents the array flow pattern of the m-th array element at the spectrum feature of the k-th sound-emitting area, r_0 represents the sound source position, j represents the imaginary unit, c represents the speed of sound, and r_m represents the coordinates of the m-th array element.
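Under the reconstructed formula v_{m,k} = e^(−j2πf_k‖r_0 − r_m‖/c), the flow pattern is a vector of unit-magnitude phase factors; a sketch with an assumed three-element line array:

```python
import numpy as np

c = 343.0  # speed of sound, m/s (assumed)

def flow_pattern(f_k, r0, r_m):
    """Array flow pattern of every element at spectral feature f_k:
    v_{m,k} = exp(-j * 2*pi * f_k * ||r0 - r_m|| / c)."""
    d = np.linalg.norm(r0 - r_m, axis=1)   # element-to-source distances
    return np.exp(-1j * 2 * np.pi * f_k * d / c)

r_m = np.array([[0.0, 0.0, 0.0], [0.05, 0.0, 0.0], [0.10, 0.0, 0.0]])
r0 = np.array([0.0, 0.0, 1.0])
v = flow_pattern(1000.0, r0, r_m)
```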
In step S9.2, the calculation formula of the feature-enhanced sound source image is:
B = Σ_{k=1}^{K} | Σ_{m=1}^{M} v_{m,k}* · X_m(f_k) |
wherein B represents the feature-enhanced sound source image, v_{m,k}* is the conjugate of the array flow pattern, f_k represents the regional spectrum feature of the k-th sound-emitting area, and X_m(f_k) represents the component, at the regional spectrum feature f_k, of the sound source signal received by the m-th array element.
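Combining the two formulas of step S9, a sketch of the per-pixel feature-enhanced value B; the geometry and the synthetic element spectra are assumptions for illustration:

```python
import numpy as np

c = 343.0  # speed of sound, m/s (assumed)

def enhanced_pixel(features, X_at_f, r0, r_m):
    """Sum, over the K regional spectral features f_k, of the magnitude
    of the flow-pattern-weighted element components:
    B = sum_k | sum_m conj(v_{m,k}) * X_m(f_k) |.
    X_at_f: (K, M), X_at_f[k, m] = component of element m at f_k."""
    d = np.linalg.norm(r0 - r_m, axis=1)
    B = 0.0
    for k, fk in enumerate(features):
        v = np.exp(-1j * 2 * np.pi * fk * d / c)   # flow pattern v_{m,k}
        B += np.abs(np.conj(v) @ X_at_f[k])
    return B

r_m = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
r0 = np.array([0.0, 0.0, 1.0])
features = [1000.0, 2000.0]
d = np.linalg.norm(r0 - r_m, axis=1)
# Synthetic components whose phases match the propagation delays exactly,
# so each feature contributes |M| = 2 and B = 4 for this pixel
X_at_f = np.array([np.exp(-1j * 2 * np.pi * fk * d / c) for fk in features])
B = enhanced_pixel(features, X_at_f, r0, r_m)
```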
The present embodiment can enhance a sound source image of a non-point sound source.
The above-mentioned embodiments are merely illustrative of the preferred embodiments of the present invention, and do not limit the scope of the present invention, and various modifications and improvements of the technical solutions of the present invention by those skilled in the art should fall within the protection scope of the present invention without departing from the design spirit of the present invention.
Claims (10)
1. A feature enhancement method for a non-point sound source image, comprising the steps of:
s1, determining the sound source position of a non-point sound source;
s2, pointing the microphone array of the acoustic imaging instrument to the sound source position to obtain an array focusing sound source signal;
s3, carrying out frequency spectrum search on the array focusing sound source signals to obtain a plurality of sound source frequency spectrum characteristics;
s4, performing band-pass filtering processing on the sound source signals received by the array elements according to a plurality of sound source frequency spectrum characteristics to obtain filtering signals received by the array elements, and performing frequency domain processing on the filtering signals to obtain frequency domain signals;
S5, forming, from the frequency domain signals received by all array elements at each time, the time signal point corresponding to each time in M-dimensional space, where the time signal point p(t) corresponding to time t has the coordinates (X_1(t), X_2(t), …, X_M(t)), X_m(t) represents the frequency domain signal received by the m-th array element at time t, i.e. the coordinate of the m-th dimension in the M-dimensional space, M represents the total number of microphone array elements, m = 1, 2, …, M, and t = 1, 2, …, T, where T represents the final time;
S6, clustering the signal points at each time based on a clustering algorithm to form K classes, i.e. the K sound-emitting areas of the non-point sound source;
S7, fitting the time signal points in each sound-emitting area to obtain the fitted straight line formed by each sound-emitting area in the M-dimensional space, where the slopes, in the different dimensions, of the fitted straight line represent the weight coefficients with which each array element receives the signal of the sound-emitting area corresponding to that fitted line;
s8, calculating to obtain sound source signals of all sounding areas at the final moment based on the filtering signals received by all array elements at the final moment and the weight coefficients of the signals received by all array elements at all sounding areas, and obtaining the area spectrum characteristics of all sounding areas based on the sound source signals of all sounding areas;
and S9, performing secondary imaging based on the frequency spectrum characteristics of each region, the positions of the array elements and the sound source position of the non-point sound source to obtain a sound source image with enhanced characteristics.
2. The method of enhancing features of a non-point sound source image according to claim 1, wherein the step S3 comprises the steps of:
S3.1, performing a differentiation operation on the array focused sound source signal:
y′(t) = d y(t) / d t
wherein y′(t) represents the differentiated array focused sound source signal, y(t) represents the array focused sound source signal, and d(·)/dt denotes differentiation;
S3.2, performing spectrum search on the differentiated array focused sound source signal to obtain a plurality of sound source spectrum features f_i, where each sound source spectrum feature satisfies the following condition:
|Y′(f_i)| > δ
wherein Y′(f) is the spectrum of y′(t), f_i denotes the i-th sound source spectrum feature, and δ represents the imaging threshold.
3. The method for enhancing the characteristics of the sound source image of the non-point sound source according to claim 1, wherein between the step S5 and the step S6, the method further comprises the steps of:
normalizing the signal point at each moment to obtain a normalized signal point corresponding to each moment, the normalization being computed as:
z(t) = p(t) / X1(t)
where X1(t) represents the frequency domain signal received by the 1st array element at time t, and z(t) represents the normalized signal point corresponding to time t;
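A hedged sketch of this normalization step: each multi-dimensional signal point is divided by its component from the 1st (reference) array element, so points differing only by an overall scale map to the same normalized point. The element-wise division is my reading of the claim, since the formula itself is an image in the source.

```python
import numpy as np

def normalize_points(points):
    """points: (T, M) array, one M-dimensional signal point per moment.
    Divide each point by its first (reference array element) component."""
    ref = points[:, :1]   # signal of the 1st array element at each moment
    return points / ref

pts = np.array([[2.0, 4.0, 6.0],
                [1.0, 2.0, 3.0]])
print(normalize_points(pts))
```

Both rows above differ only by a factor of two, so they normalize to the same point — which is what makes the subsequent clustering scale-invariant.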
4. The method of claim 3, wherein the step S6 comprises the steps of:
S6.2, calculating the distance between each normalized signal point and each cluster center, and assigning each normalized signal point to the class of the cluster center closest to it;
S6.3, recalculating each cluster center from all the normalized signal points in its class to obtain the updated cluster centers;
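Steps S6.2–S6.3 are the assignment and update steps of standard k-means. A minimal sketch, with the caveat that the initialization (the S6.1 step, not shown in the source) is an illustrative choice here — the first k points are taken as initial centers:

```python
import numpy as np

def kmeans(points, k, iters=20):
    """Plain k-means over the normalized signal points.
    Initialization from the first k points is an illustrative assumption."""
    centers = points[:k].copy()
    for _ in range(iters):
        # S6.2: assign each point to its nearest cluster center
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # S6.3: recompute each center as the mean of its class
        centers = np.array([points[labels == j].mean(axis=0)
                            for j in range(k)])
    return labels, centers

pts = np.array([[0.0, 0.0], [5.0, 5.0], [0.1, 0.0],
                [5.1, 5.0], [0.0, 0.1], [5.0, 5.1]])
labels, centers = kmeans(pts, 2)
print(labels)
```

Each resulting class is then treated as one sound-emitting area of the non-point source.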
5. The method according to claim 4, wherein in step S7 the fitted straight line formed in the multi-dimensional space for each sound-emitting area is characterized by its slopes in the different dimensions: the slope of the fitted line of the q-th sound-emitting area in the m-th dimension represents the weight coefficient of the signal received by the m-th array element for the q-th sound-emitting area.
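One way to realize the slope-as-weight idea of this claim is a least-squares line fit within each cluster, dimension by dimension. In the sketch below, pairing every dimension against the first (reference) dimension and fitting through the origin are both assumptions, since the claim's fitting formula is not reproduced in the source:

```python
import numpy as np

def area_weights(points):
    """points: (N, M) signal points of one sound-emitting area.
    Fit x_m ~ k_m * x_1 for each dimension m; the slopes k_m serve as
    the per-array-element weight coefficients for this area."""
    ref = points[:, 0]
    # least-squares through-origin slope of each dimension vs. the reference
    return points.T @ ref / (ref @ ref)

pts = np.array([[1.0, 2.0, 3.0],
                [2.0, 4.0, 6.0],
                [3.0, 6.0, 9.0]])
print(area_weights(pts))
```

For perfectly collinear points as above, the recovered slopes are exact.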
7. The method of claim 1, wherein in step S8 the sound source signal of each sound-emitting area at the final moment is calculated as a weighted combination:
s_q = Σ_m w_mq · x_m(T)
where x_m(T) denotes the filtered signal received by the m-th array element at the final moment T, w_mq denotes the weight coefficient of the signal received by the m-th array element for the q-th sound-emitting area, and s_q denotes the sound source signal of the q-th sound-emitting area at the final moment.
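Under this reading, the region source signals at the final moment reduce to a matrix-vector product of the weight coefficients with the filtered element signals. The claim's exact combination rule is an image in the source, so the plain weighted sum below is an assumption:

```python
import numpy as np

# x_final[m]: filtered signal received by array element m at the final moment
# w[m, q]:    weight coefficient of element m for sound-emitting area q
x_final = np.array([1.0, 2.0, 3.0])
w = np.array([[1.0, 0.0],
              [0.5, 0.5],
              [0.0, 1.0]])
s = w.T @ x_final   # s[q]: sound source signal of area q at the final moment
print(s)
```

The spectrum of each s_q then yields the regional spectrum characteristic used in step S9.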
8. The method for enhancing characteristics of a non-point sound source image according to claim 1, wherein step S9 comprises the following steps:
S9.1, calculating the array flow pattern of each array element at each regional spectral feature, based on the regional spectral features of the sound-emitting areas of the non-point sound source, the positions of the array elements and the sound source position;
and S9.2, performing secondary imaging based on the array flow patterns of the array elements at the regional spectral features to obtain a feature-enhanced sound source image.
9. The method according to claim 8, wherein in step S9.1 the array flow pattern of each array element at each regional spectral feature is calculated as:
a_m(f_q) = exp(-j · 2π · f_q · ||r_s - r_m|| / c)
where f_q denotes the regional spectral feature of the q-th sound-emitting area, a_m(f_q) denotes the array flow pattern of the m-th array element at the spectral feature of the q-th sound-emitting area, r_s denotes the sound source position, j denotes the imaginary unit, c denotes the speed of sound, and r_m denotes the coordinates of the m-th array element.
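The array flow pattern here is, in standard beamforming terms, a steering vector evaluated at each regional spectral feature. A sketch under that reading — the negative sign convention of the phase and the free-space propagation delay are assumptions, since the claim's formula is an image in the source:

```python
import numpy as np

def steering_vector(f, src, elements, c=343.0):
    """a_m(f) = exp(-j*2*pi*f*|r_src - r_m|/c) for each element position r_m."""
    d = np.linalg.norm(elements - src, axis=1)   # source-to-element distances
    return np.exp(-1j * 2 * np.pi * f * d / c)

src = np.array([0.0, 0.0, 1.0])
elements = np.array([[0.0, 0.0, 0.0],
                     [0.1, 0.0, 0.0]])
a = steering_vector(1000.0, src, elements)
print(np.abs(a))
```

Each component has unit magnitude; only the phase encodes the propagation delay from the candidate source position to the element.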
10. The method of claim 9, wherein in step S9.2 the feature-enhanced sound source image is calculated by summing, over the regional spectral features, the outputs obtained by focusing the array signal components at each regional spectral feature with the corresponding array flow patterns:
W = Σ_q | Σ_m a_m(f_q)* · X_m(f_q) |
where W represents the feature-enhanced sound source image, f_q denotes the regional spectral feature of the q-th sound-emitting area, and X_m(f_q) denotes the component at f_q of the sound source signal received by the m-th array element.
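The secondary imaging step can then be read as a delay-and-sum beamformer evaluated only at the regional spectral features, scanned over a grid of candidate source positions. The sketch below follows that reading; the squared-magnitude power form and the synthetic signal model are assumptions, not the claim's exact (image-only) formula:

```python
import numpy as np

def enhanced_image(freqs, X, elements, grid, c=343.0):
    """freqs: regional spectral features f_q; X[q, m]: component at f_q of the
    signal received by element m; grid: (P, 3) candidate source positions.
    Sum delay-and-sum beamformer outputs over all regional spectral features."""
    img = np.zeros(len(grid))
    for q, f in enumerate(freqs):
        d = np.linalg.norm(grid[:, None, :] - elements[None, :, :], axis=2)
        a = np.exp(-1j * 2 * np.pi * f * d / c)   # flow pattern per grid point
        img += np.abs(a.conj() @ X[q]) ** 2       # focus X[q] on each point
    return img

# synthetic check: signal components generated from a source at grid[0]
elements = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0], [0.0, 0.3, 0.0]])
grid = np.array([[0.0, 0.0, 1.0], [2.0, 0.0, 1.0], [0.0, 2.0, 1.0]])
d0 = np.linalg.norm(grid[0] - elements, axis=1)
X = np.exp(-1j * 2 * np.pi * 1000.0 * d0 / 343.0)[None, :]
img = enhanced_image([1000.0], X, elements, grid)
print(np.argmax(img))
```

Restricting the sum to the regional spectral features is what concentrates the image energy on the detected sound-emitting areas, which is the "feature enhancement" effect the invention claims for weak sources.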
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211524057.2A CN115575896B (en) | 2022-12-01 | 2022-12-01 | Feature enhancement method for non-point sound source image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115575896A true CN115575896A (en) | 2023-01-06 |
CN115575896B CN115575896B (en) | 2023-03-10 |
Family
ID=84590473
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211524057.2A Active CN115575896B (en) | 2022-12-01 | 2022-12-01 | Feature enhancement method for non-point sound source image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115575896B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014215385A (en) * | 2013-04-24 | 2014-11-17 | 日本電信電話株式会社 | Model estimation system, sound source separation system, model estimation method, sound source separation method, and program |
CN107680593A (en) * | 2017-10-13 | 2018-02-09 | 歌尔股份有限公司 | The sound enhancement method and device of a kind of smart machine |
WO2018045973A1 (en) * | 2016-09-08 | 2018-03-15 | 南京阿凡达机器人科技有限公司 | Sound source localization method for robot, and system |
JP2018063200A (en) * | 2016-10-14 | 2018-04-19 | 日本電信電話株式会社 | Sound source position estimation device, sound source position estimation method, and program |
KR20200038688A (en) * | 2018-10-04 | 2020-04-14 | 서희 | Apparatus and method for providing sound source |
CN113884986A (en) * | 2021-12-03 | 2022-01-04 | 杭州兆华电子有限公司 | Beam focusing enhanced strong impact signal space-time domain joint detection method and system |
CN114175144A (en) * | 2019-07-30 | 2022-03-11 | 杜比实验室特许公司 | Data enhancement for each generation of training acoustic models |
Non-Patent Citations (2)
Title |
---|
YING-WEI TAN ET AL.: "ENHANCED POWER-NORMALIZED FEATURES FOR MANDARIN ROBUST SPEECH RECOGNITION BASED ON A VOICED-UNVOICED-SILENCE DECISION", 《2014 IEEE CHINA SUMMIT & INTERNATIONAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (CHINASIP)》 * |
LYU Zhao et al.: "Speech feature enhancement based on frequency-domain ICA", 《Journal of Vibration and Shock》 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2746737B1 (en) | Acoustic sensor apparatus and acoustic camera using a mems microphone array | |
CN103329567B (en) | Device, method and system for directivity information of deriving | |
JP5701164B2 (en) | Position detection apparatus and position detection method | |
Ginn et al. | Noise source identification techniques: simple to advanced applications | |
CN113868583B (en) | Method and system for calculating sound source distance focused by subarray wave beams | |
WO2009145310A1 (en) | Sound source separation and display method, and system thereof | |
CN109489796A (en) | A kind of underwater complex structural radiation noise source fixation and recognition based on unit radiation method and acoustic radiation forecasting procedure | |
CN109683134A (en) | A kind of high-resolution localization method towards rotation sound source | |
CN113607447A (en) | Acoustic-optical combined fan fault positioning device and method | |
CN107113496A (en) | The surround sound record of mobile device | |
CN110444220B (en) | Multi-mode remote voice perception method and device | |
Jing et al. | Sound source localisation using a single acoustic vector sensor and multichannel microphone phased arrays | |
Prezelj et al. | A novel approach to localization of environmental noise sources: Sub-windowing for time domain beamforming | |
CN115575896B (en) | Feature enhancement method for non-point sound source image | |
CN114355290A (en) | Sound source three-dimensional imaging method and system based on stereo array | |
TWI429885B (en) | Method for visualizing sound source energy distribution in reverberant environment | |
Zhao et al. | Design and evaluation of a prototype system for real-time monitoring of vehicle honking | |
CN115201821B (en) | Small target detection method based on strong target imaging cancellation | |
Kwak et al. | Convolutional neural network trained with synthetic pseudo-images for detecting an acoustic source | |
CN115061089B (en) | Sound source positioning method, system, medium, equipment and device | |
CN116309921A (en) | Delay summation acoustic imaging parallel acceleration method based on CUDA technology | |
Chen et al. | Insight into split beam cross-correlator detector with the prewhitening technique | |
Bianchi et al. | High resolution imaging of acoustic reflections with spherical microphone arrays | |
Meng et al. | Acquisition of exterior multiple sound sources for train auralization based on beamforming | |
CN115435891A (en) | Road vehicle sound power monitoring system based on vector microphone |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||