CN112863540A - Sound source distribution visualization method and computer program product - Google Patents

Sound source distribution visualization method and computer program product

Info

Publication number
CN112863540A
Authority
CN
China
Prior art keywords
sound source
source distribution
detection
signal
physical signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911186137.XA
Other languages
Chinese (zh)
Inventor
王智中
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ruijie International Co ltd
Original Assignee
Ruijie International Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ruijie International Co ltd filed Critical Ruijie International Co ltd
Priority to CN201911186137.XA priority Critical patent/CN112863540A/en
Publication of CN112863540A publication Critical patent/CN112863540A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/06 Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
    • G10L21/10 Transforming into visible information
    • G10L21/14 Transforming into visible information by displaying frequency domain information
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01H MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
    • G01H17/00 Measuring mechanical vibrations or ultrasonic, sonic or infrasonic waves, not provided for in the preceding groups
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/27 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique
    • G10L25/30 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the analysis technique using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)

Abstract

The invention provides a sound source distribution visualization method and a computer program product. The method comprises the following steps: reading a target image of a detection target; marking a detection boundary on the target image and setting a plurality of detection points on the detection boundary, each detection point having a dedicated code; inputting, for each detection point, a physical signal generated during operation of the detection target; calculating the spectrum distribution of each physical signal through spectrum superposition to analyze the bandwidth range of each physical signal, and obtaining a time waveform within that bandwidth range through analysis processing to generate a characteristic signal of each physical signal; and processing each characteristic signal through a neural network computation to form an image sound source distribution with visual features, which is presented within the detection boundary together with the target image. The sound source distribution can thereby be obtained instantly, quickly, and accurately.

Description

Sound source distribution visualization method and computer program product
Technical Field
The present invention relates to a visualization technology, and more particularly, to a sound source distribution visualization method and computer program product.
Background
In the field of noise control, correctly identifying the noise source is the basis of effective noise improvement, so the accuracy of sound source identification and localization affects the result of noise control. Only when the position, intensity distribution, velocity distribution, density distribution, and other characteristics of the noise source are fully grasped can the influence of the noise be effectively controlled or correctly evaluated, and the noise caused by structural vibration be reduced so as to optimize the acoustic behavior of the structure. For example, when noise control techniques are applied to the power-machinery diagnosis industry, they not only assist engineers in locating the fault point of a power machine and evaluating the influence of the noise source, but also improve the accuracy of the engineers' judgment.
In the prior art, the sound intensity method is used to locate a sound source: the space of the detection target is divided into a plurality of grids, the sound intensity of each grid region is measured with a sound intensity meter (sound intensity probe), and the current sound intensity distribution is then reconstructed by interpolation, thereby localizing the sound source.
In addition, U.S. Pat. No. 20050225497 discloses a method of identifying a sound source using a beam forming array technique. However, the beam forming array technique can only identify a far-field sound field and performs poorly on unsteady sound sources; it also cannot operate in real time, cannot identify sound fields at different coordinates synchronously, and requires the shape of the microphone array to be changed in order to prevent spatial distortion.
Disclosure of Invention
To address the above problems, the present invention provides a sound source distribution visualization method that combines analysis processing and neural network computation with visual features, so that the visualized distribution of the sound source can be obtained instantly, quickly, and accurately.
An embodiment of the present invention provides a sound source distribution visualization method comprising: an image creating step: reading a target image of a detection target; a marking step: marking a detection boundary on the target image and setting a plurality of detection points on the detection boundary, each detection point having a dedicated code; a signal acquisition step: inputting, for each detection point, a physical signal generated during operation of the detection target; an operation processing step: calculating the spectrum distribution of each physical signal through spectrum superposition to analyze the bandwidth range of each physical signal, and obtaining a time waveform within that bandwidth range through analysis processing to generate a characteristic signal of each physical signal; and a visualization step: processing each characteristic signal through a neural network computation to form an image sound source distribution with visual features, which is presented within the detection boundary together with the target image.
In one embodiment, each characteristic signal is processed through the neural network computation to obtain the intensity variation of the characteristic signals produced by the different distances between the detection points, thereby forming the image sound source distribution with visual features.
In one embodiment, a continuous and smooth image sound source distribution is formed between the detection points by a bi-harmonic spline interpolation method.
In one embodiment, the image sound source distribution with visual features exhibits color variations according to the intensity of each characteristic signal.
In one embodiment, the analysis processing is a time-frequency analysis; each physical signal obtains a time waveform within its bandwidth range through the analysis processing, and the characteristic signal of each physical signal is selectively generated as the root mean square value or the maximum value of that waveform.
In one embodiment, the neural network computation is a general regression neural network (GRNN) method or a supervised learning network method.
In one embodiment, when the detection target is a constant-speed device, the physical signals are input one detection point at a time, and each physical signal is stored in association with the dedicated code of its detection point.
In one embodiment, when the detection target is a variable-speed device, the physical signals are input at all detection points synchronously, and each physical signal is stored in association with the dedicated code of its detection point.
In one embodiment, the physical signal is a sound signal or a vibration signal.
One embodiment of the present invention provides a computer program product comprising a non-transitory computer readable medium having instructions recorded thereon, which when executed by a computer implement the method of any of the above embodiments.
Through the above, by combining analysis processing and neural network computation with visual features, the invention can instantly, quickly, and accurately obtain the visualized image distribution of the sound source from the physical signals generated during operation of the detection target, thereby solving the problem that the prior art cannot obtain the sound source distribution instantly and accurately.
Furthermore, the invention forms the image sound source distribution with visual features through analysis processing and neural network computation without needing to consider whether the sound source transmission path is linear or nonlinear.
In addition, the invention can be applied to both constant-speed and variable-speed equipment, thereby broadening its range of application.
Drawings
FIG. 1 is a schematic diagram of the method steps of an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a detection boundary marked on a target image according to the present invention;
FIG. 3 is a schematic diagram of a plurality of detection points arranged on the detection boundary according to the present invention;
FIG. 4 is a schematic diagram of inputting physical signals corresponding to various detection points according to the present invention;
FIG. 5 is a schematic diagram of calculating the spectral distribution of each physical signal by spectral superposition according to the present invention;
FIG. 6 is a schematic diagram illustrating the image sound source distribution presented within the detection boundary together with the target image according to the present invention.
Description of the reference numerals
I: target image
B: detection boundary
P: detection point
C: dedicated code
SI: image sound source distribution
S1: image creating step
S2: marking step
S3: signal acquisition step
S4: operation processing step
S5: visualization step
Detailed Description
To illustrate the central concepts of the present invention set forth in the above summary, specific embodiments are described below with reference to the drawings. The objects in the embodiments are drawn at scales, dimensions, deformations, or displacements suitable for illustration rather than at the proportions of the actual components.
Referring to fig. 1 to 6, the present invention provides a method for visualizing sound source distribution, including:
an image creating step S1: reading a target image I of a detection target; the detection target can be constant-speed equipment or variable-speed equipment, and the target image I of the detection target is obtained by a camera device (e.g., a camera, a video camera, or a smart mobile device) or is generated by drawing.
A marking step S2: marking a detection boundary B on the target image I obtained in the image creating step S1, and setting a plurality of detection points P on the detection boundary B, each detection point P having a dedicated code C. The detection boundary B is a closed range and can be a closed rectangle, a polygon, or a circular curve; in the embodiment of the present invention, the detection boundary B is a closed rectangle, as shown in fig. 2. Furthermore, the number of detection points P can be set according to the user's requirements, and the dedicated code C of each detection point P, which serves to identify that detection point, can be a number or an English letter. In the embodiment of the present invention there are 12 detection points P, the dedicated code C of each detection point P is a number, and the codes are 1 to 12, respectively, as shown in fig. 3 and 4.
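As an illustration only (not part of the patent text), the marking step S2 can be sketched in Python as a rectangular detection boundary and a grid of numbered detection points placed over the target image; the pixel coordinates, the 3 x 4 grid, and all names below are assumptions chosen to mirror the 12-point example of fig. 3.

```python
import numpy as np

# Rectangular detection boundary B on the target image I, given in pixel
# coordinates as (left, top, right, bottom).  The values are illustrative.
boundary = (50, 40, 350, 260)

def make_detection_points(boundary, rows=3, cols=4):
    """Place rows x cols detection points P inside the boundary and give each
    one a dedicated code C (here simply the numbers 1..rows*cols)."""
    left, top, right, bottom = boundary
    xs = np.linspace(left, right, cols)
    ys = np.linspace(top, bottom, rows)
    points = {}
    code = 1
    for y in ys:
        for x in xs:
            points[code] = (float(x), float(y))  # dedicated code -> position
            code += 1
    return points

detection_points = make_detection_points(boundary)
# e.g. {1: (50.0, 40.0), 2: (150.0, 40.0), ..., 12: (350.0, 260.0)}
```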
A signal acquisition step S3: after the marking step S2, a physical signal generated during operation of the detection target is input for each detection point P, as shown in fig. 4. The physical signal is a sound signal or a vibration signal. When the physical signal is a sound signal, it can be read by a sound reading device (such as a stand-alone microphone, the built-in microphone of a smart mobile device, or a voice recorder, but the invention is not limited thereto); when the physical signal is a vibration signal, it can be read by a vibration sensor (for example, a displacement sensor, a velocity sensor, an acceleration sensor, or an accelerometer, but the invention is not limited thereto).
Furthermore, when the detection target is a constant-speed device, a single sound reading device or vibration sensor, chosen according to the type of physical signal to be read, reads the physical signal at each detection point P one point at a time, and each read physical signal is stored in association with the dedicated code C of its detection point P.
In addition, when the detection target is a variable-speed device, a plurality of sound reading devices or vibration sensors, chosen according to the type of physical signal to be read, are used: each sound reading device or vibration sensor is placed at the actual position on the detection target corresponding to one detection point P, the physical signals are input to all detection points P synchronously, and each read physical signal is stored in association with the dedicated code C of its detection point P.
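A minimal sketch of the signal acquisition step S3, under the assumption that the physical signals have already been recorded to WAV files named after each detection point's dedicated code (e.g. point_1.wav); the file layout and the use of pre-recorded files in place of live microphone or vibration-sensor input are illustrative assumptions, not requirements of the patent.

```python
from pathlib import Path
from scipy.io import wavfile

def acquire_signals(points, folder="recordings"):
    """Load one physical signal per detection point and store it keyed by the
    point's dedicated code, as described for the constant-speed case (signals
    read one at a time); for the variable-speed case the same mapping would be
    filled from synchronously recorded channels."""
    signals = {}
    for code in points:
        # Hypothetical file layout: one WAV file per dedicated code.
        rate, data = wavfile.read(Path(folder) / f"point_{code}.wav")
        if data.ndim > 1:          # keep a single channel if stereo
            data = data[:, 0]
        signals[code] = (rate, data.astype(float))
    return signals
```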
An operation processing step S4: the spectrum distribution of the physical signals obtained for the detection points P in the signal acquisition step S3 is calculated through spectrum superposition so as to analyze the bandwidth range of each physical signal, and a time waveform within that bandwidth range is obtained through analysis processing to generate a characteristic signal of each physical signal, as shown in fig. 5. In the embodiment of the invention, the analysis processing is a time-frequency analysis, and the characteristic signal of a physical signal can be a root mean square value or a waveform maximum value; that is, after the bandwidth range of each physical signal has been analyzed, the time waveform within that range is obtained through the analysis processing and the characteristic signal of each physical signal is selectively generated as the root mean square value or the maximum value of that waveform, where the bandwidth range of each physical signal can be set or preset by the user.
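The following is a minimal sketch, under stated assumptions, of how the operation processing step S4 could be realized: the magnitude spectra of all physical signals are superposed (averaged) to expose the bandwidth of interest, each signal is band-limited to that range to recover a time waveform, and the root mean square value (or the waveform maximum) is taken as the characteristic signal. The FFT averaging and the Butterworth band-pass filter are one possible realization; the patent itself only specifies spectrum superposition, time-frequency analysis, and an RMS or maximum feature.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def superposed_spectrum(signals):
    """Average the magnitude spectra of all physical signals ("spectrum
    superposition") so the shared bandwidth of interest can be read off."""
    n = min(len(x) for _, x in signals.values())
    rate = next(iter(signals.values()))[0]
    spectra = [np.abs(np.fft.rfft(x[:n])) for _, x in signals.values()]
    freqs = np.fft.rfftfreq(n, d=1.0 / rate)
    return freqs, np.mean(spectra, axis=0)

def characteristic_signal(signal, band, use_rms=True):
    """Band-limit one physical signal to the analyzed bandwidth and reduce the
    resulting time waveform to a single characteristic value (RMS or max)."""
    rate, x = signal
    low, high = band
    b, a = butter(4, [low / (rate / 2), high / (rate / 2)], btype="bandpass")
    waveform = filtfilt(b, a, x)
    return np.sqrt(np.mean(waveform ** 2)) if use_rms else np.max(np.abs(waveform))

# Illustrative usage: choose a band around the superposed spectral peak, then
# compute one characteristic value per detection point.
# freqs, spec = superposed_spectrum(signals)
# band = (800.0, 1200.0)   # assumed bandwidth read off the averaged spectrum
# features = {code: characteristic_signal(sig, band) for code, sig in signals.items()}
```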
A visualization step S5: the characteristic signals of the detection points P obtained in the operation processing step S4 are processed through a neural network computation to form an image sound source distribution SI with visual features, and the image sound source distribution SI is presented within the detection boundary B together with the target image I, where the image sound source distribution SI is superimposed on the target image I and the individual detection points P are not displayed, as shown in fig. 6.
In the embodiment of the present invention, the neural network computation is a general regression neural network (GRNN) method or a supervised learning network method, and the image sound source distribution with visual features exhibits color variations according to the intensity of each characteristic signal. In further detail, each characteristic signal is processed through the neural network computation to obtain the intensity variation of the characteristic signals produced by the different distances between the detection points P; for example, with 12 detection points P, the distance from each detection point P to every other detection point P differs, so different intensity variations of the characteristic signals arise between the detection points P. A continuous and smooth image sound source distribution SI is then formed between the detection points P by biharmonic spline interpolation, as shown in fig. 6.
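As a sketch only, the visualization step S5 can be approximated by a general regression neural network (equivalently, Gaussian-kernel Nadaraya-Watson regression over the detection-point features) or by a thin-plate-spline radial basis function, which is the two-dimensional counterpart of the biharmonic spline interpolation named above, evaluated on a pixel grid inside the detection boundary and alpha-blended over the target image as a color map. The GRNN bandwidth, the colormap, and the blending weight are illustrative choices, not values taken from the patent.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import RBFInterpolator

def grnn_predict(train_xy, train_vals, query_xy, sigma=40.0):
    """GRNN: Gaussian-kernel weighted average of the characteristic values,
    evaluated at every query location (intensity varies with distance)."""
    d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ train_vals) / w.sum(axis=1)

def render_distribution(image, points, features, boundary, grid=200, method="rbf"):
    """Form the image sound source distribution SI inside the detection
    boundary B and overlay it on the target image I."""
    left, top, right, bottom = boundary
    codes = sorted(points)
    train_xy = np.array([points[c] for c in codes], dtype=float)
    train_vals = np.array([features[c] for c in codes], dtype=float)

    gx, gy = np.meshgrid(np.linspace(left, right, grid),
                         np.linspace(top, bottom, grid))
    query = np.column_stack([gx.ravel(), gy.ravel()])

    if method == "grnn":
        field = grnn_predict(train_xy, train_vals, query,
                             sigma=0.2 * (right - left))
    else:
        # Thin-plate-spline RBF: the 2-D analogue of biharmonic spline
        # interpolation, giving a continuous, smooth field between the points.
        field = RBFInterpolator(train_xy, train_vals,
                                kernel="thin_plate_spline")(query)
    field = field.reshape(grid, grid)

    plt.imshow(image)                                      # target image I
    plt.imshow(field, extent=(left, right, bottom, top),   # distribution SI
               cmap="jet", alpha=0.5)                      # color by intensity
    plt.axis("off")
    plt.show()
```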
Some embodiments according to the invention comprise a data carrier with electronically readable control signals capable of cooperating with a programmable computer system such that one of the methods described in the invention is performed. Generally, embodiments of the present invention can be implemented as a computer program product having program code operative to perform one of the methods described above when the computer program product is executed on a terminal device; the program code may, for example, be stored on a machine-readable carrier.
Other embodiments of the invention comprise a computer program, stored on a machine-readable carrier, for performing the methods described herein. In other words, an embodiment of the inventive method is a computer program having program code for performing one of the methods described herein when the computer program product is executed on a terminal device, such as a computer or a smart mobile device.
Therefore, when the embodiment of the invention is a computer program product with program code, the target image I of the detection target can be obtained through a signal connection between the terminal device and the camera device, or the target image I of the detection target can be generated on the terminal device by drawing. Furthermore, the terminal device can be signal-connected to a sound reading device or a vibration sensor to acquire the physical signals.
In summary, the present invention has the following effects:
the invention matches the physical signal generated in the operation process of the detection target with the visual characteristic through analysis operation and neural network operation, and can instantly, quickly and accurately obtain the visual image distribution of the sound source.
The invention can form the image sound source distribution with visual characteristics by analysis operation and neural network operation without considering the linear or nonlinear sound source transmission path.
The invention can be applied to equipment for detecting and measuring the rotating speed or changing the rotating speed so as to improve the application range of the invention.
The above examples are provided only for illustrating the present invention and are not intended to limit the scope of the present invention. All such modifications and variations are within the scope of the invention as determined by the appended claims.

Claims (10)

1. A method for visualizing a sound source distribution, comprising:
an image creating step: reading a target image of a detection target;
a marking step: marking a detection boundary on the target image and setting a plurality of detection points on the detection boundary, wherein each detection point has a dedicated code;
a signal acquisition step: inputting a physical signal generated in the operation process of the detection target corresponding to each detection point;
an operation processing step: calculating the spectrum distribution of each physical signal through spectrum superposition, analyzing the bandwidth range of each physical signal, and acquiring a time waveform in the bandwidth range of each physical signal through analysis and operation processing to generate a characteristic signal of each physical signal; and
a visualization step: and calculating each characteristic signal through a neural network to form an image sound source distribution of the visual characteristic, wherein the image sound source distribution is presented in the detection boundary in cooperation with the target image.
2. The sound source distribution visualization method according to claim 1, wherein: and calculating each characteristic signal through the neural network to obtain the intensity change of each characteristic signal generated by different distances between the detection points so as to form the image sound source distribution of the visual characteristic.
3. The sound source distribution visualization method according to claim 2, wherein: and forming continuous and smooth image sound source distribution among the detection points by a bi-harmonic spline interpolation method.
4. The sound source distribution visualization method according to claim 3, wherein: the image sound source distribution of the visual features presents color variations according to the intensity of each feature signal.
5. The sound source distribution visualization method according to claim 4, wherein: the analysis processing is a time-frequency analysis, each physical signal obtains the time waveform within its bandwidth range through the analysis processing, and the characteristic signal of each physical signal is selectively generated as a root mean square value or a waveform maximum value.
6. The sound source distribution visualization method according to claim 1, wherein: the neural network operation is a regression neural network method or a supervised neural network method.
7. The sound source distribution visualization method according to claim 1, wherein: when the detection target is a constant-speed device, the physical signals are input corresponding to the detection points one point at a time, and each physical signal is stored in association with the dedicated code of its detection point.
8. The sound source distribution visualization method according to claim 1, wherein: when the detection target is a variable-speed device, the physical signals are input corresponding to the detection points synchronously, and each physical signal is stored in association with the dedicated code of its detection point.
9. The sound source distribution visualization method according to claim 1, wherein: the physical signal is a sound signal or a vibration signal.
10. A computer program product, comprising: a non-transitory computer-readable medium comprising instructions which, when executed by a computer, implement the sound source distribution visualization method of any one of claims 1 to 9.
CN201911186137.XA 2019-11-28 2019-11-28 Sound source distribution visualization method and computer program product Pending CN112863540A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911186137.XA CN112863540A (en) 2019-11-28 2019-11-28 Sound source distribution visualization method and computer program product

Publications (1)

Publication Number Publication Date
CN112863540A true CN112863540A (en) 2021-05-28

Family

ID=75985937

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911186137.XA Pending CN112863540A (en) 2019-11-28 2019-11-28 Sound source distribution visualization method and computer program product

Country Status (1)

Country Link
CN (1) CN112863540A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0981066A (en) * 1995-09-14 1997-03-28 Toshiba Corp Display device
CN101556187A (en) * 2009-05-07 2009-10-14 广东美的电器股份有限公司 Statistically optimal near-field acoustical holography used for visual recognition of air-conditioner noise sources and operation method thereof
CN104346531A (en) * 2014-10-30 2015-02-11 重庆大学 Hospital acoustic environment simulation system based on social force model
CN106488358A (en) * 2015-09-09 2017-03-08 上海其高电子科技有限公司 Optimize sound field imaging localization method and system
CN106934149A (en) * 2017-03-09 2017-07-07 哈尔滨工业大学 A kind of Forecasting Methodology of calculating crowd noise stack result in space
CN107688165A (en) * 2017-07-11 2018-02-13 国网山西省电力公司电力科学研究院 A kind of extra-high voltage transformer vibration noise source localization method

Similar Documents

Publication Publication Date Title
CN102521560B (en) Instrument pointer image identification method of high-robustness rod
US11301712B2 (en) Pointer recognition for analog instrument image analysis
JP6688962B2 (en) Judgment device, judgment method, and judgment program
US20180137386A1 (en) Object instance identification using three-dimensional spatial configuration
CN105894002A (en) Instrument reading identification method based on machine vision
CN107194908A (en) Image processing apparatus and image processing method
KR20190075641A (en) Printed circuit board inspecting apparatus, method for detecting anomalry of solder paste and computer readable recording medium
CN111027531A (en) Pointer instrument information identification method and device and electronic equipment
CN113554645B (en) Industrial anomaly detection method and device based on WGAN
CN107924182A (en) The control method of monitoring arrangement and monitoring arrangement
CN109211138A (en) A kind of appearance detection system and method
CN112863540A (en) Sound source distribution visualization method and computer program product
KR20100125015A (en) Apparatus and method for calibration, and calibration rig
CN116645612A (en) Forest resource asset determination method and system
CN116486146A (en) Fault detection method, system, device and medium for rotary mechanical equipment
TWI708191B (en) Sound source distribution visualization method and computer program product thereof
CN111833905B (en) System and method for detecting quality of marked character based on audio analysis
CN114694128A (en) Pointer instrument detection method and system based on abstract metric learning
CN114255458A (en) Method and system for identifying reading of pointer instrument in inspection scene
KR102039902B1 (en) Remote device precision inspection system and method
KR20080043687A (en) Method for measuring and displaying a factor, apparatus for measuring and displaying a factor, computer readble medium on which program for measuring and displaying a factor is recorded and sound scanner
JP2020144619A (en) Abnormality detecting device and abnormality detecting method
KR101830331B1 (en) Apparatus for detecting abnormal operation of machinery and method using the same
CN110738180B (en) Method for evaluating signal accuracy and system precision in detection process
CN116297620B (en) Magnetic variable measurement method and system for nuclear magnetic resonance apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination