CN113759314A - Sound source visualization method, device and system and computer readable storage medium - Google Patents


Info

Publication number
CN113759314A
CN113759314A
Authority
CN
China
Prior art keywords
sound source
sound
audio data
information
position information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111022397.0A
Other languages
Chinese (zh)
Inventor
万杉杉
李俊
黄晴媛
车骋
宫韬
徐甲甲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Xunfei Intelligent Technology Co ltd
Original Assignee
Zhejiang Xunfei Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Xunfei Intelligent Technology Co ltd filed Critical Zhejiang Xunfei Intelligent Technology Co ltd
Priority to CN202111022397.0A priority Critical patent/CN113759314A/en
Publication of CN113759314A publication Critical patent/CN113759314A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/18Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
    • G01S5/20Position of source determined by a plurality of spaced direction-finders

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The application discloses a sound source visualization method, device, system, and computer-readable storage medium. The method comprises the following steps: collecting sound in the current environment through a microphone array to obtain sound information to be processed; performing localization processing on the sound information to be processed to obtain the position information of the sound source in the current environment; and sending a control instruction to a laser module so that the laser module emits light pointing at the position of the sound source. In this way, the position of the sound source can be made visible.

Description

Sound source visualization method, device and system and computer readable storage medium
Technical Field
The present application relates to the field of sound source localization technologies, and in particular, to a sound source visualization method, device, system, and computer-readable storage medium.
Background
In gas leakage detection, the position of a gas leak can be located by ultrasound. However, acoustic imaging technology only shows, in real time, the effect of superimposing a video picture with an acoustic image/thermal image. When the pipeline is large or pipelines are laid in large numbers, and the surrounding environment contains no reference objects, the position of the leak point in the video picture must be compared repeatedly with the real position, making it difficult to find the true leak point in reality; reference objects may even need to be added before the gas leak point can be found.
Disclosure of Invention
The application provides a sound source visualization method, a sound source visualization device, a sound source visualization system and a computer-readable storage medium, which can realize the position visualization of a sound source.
In order to solve the above technical problem, the technical solution adopted by the present application is: providing a sound source visualization method, the method comprising: collecting sound in the current environment through a microphone array to obtain sound information to be processed; performing localization processing on the sound information to be processed to obtain the position information of the sound source in the current environment; and sending a control instruction to a laser module so that the laser module emits light pointing at the position of the sound source.
In order to solve the above technical problem, another technical solution adopted by the present application is: providing an acoustic imaging device comprising a microphone array, a processing module, and a laser module. The microphone array is used to collect the sound in the current environment to obtain sound information to be processed; the processing module is connected to the microphone array and used to perform localization processing on the sound information to obtain the position information of the sound source; the laser module is connected to the processing module and used to emit light pointing at the position of the sound source based on that position information.
In order to solve the above technical problem, another technical solution adopted by the present application is: an acoustic imaging system is provided, which comprises an acoustic imaging apparatus and a laser device connected to each other, wherein the acoustic imaging apparatus is configured to control the laser device, and the acoustic imaging apparatus comprises a memory and a processor connected to each other, wherein the memory is configured to store a computer program, and the computer program, when executed by the processor, is configured to implement the sound source visualization method in the above technical solution.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided a computer readable storage medium for storing a computer program for implementing the sound source visualization method of the above technical solution when the computer program is executed by a processor.
Through the above schemes, the beneficial effects of the present application are as follows: first, a microphone array collects all sounds in the current environment to obtain sound information to be processed; then, a sound source localization method performs localization processing on the sound information to obtain the position information of the sound source in the current environment; a control instruction is then generated and sent to a laser module, which, upon receiving the instruction, emits laser light pointing at the position of the sound source. By combining the microphone array with the laser module, the laser emitted by the laser module can be aimed directly at the position of the sound source in the current environment, making the position of the sound source visible and convenient for the user to check.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings based on them without creative effort. Wherein:
fig. 1 is a schematic flow chart diagram of an embodiment of a sound source visualization method provided in the present application;
FIG. 2 is a schematic flow chart diagram illustrating another embodiment of a sound source visualization method provided herein;
FIG. 3 is a schematic diagram of an embodiment of an acoustic imaging apparatus provided herein;
FIG. 4 is a schematic block diagram of an embodiment of an acoustic imaging system provided herein;
FIG. 5 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be noted that the following examples are only illustrative of the present application, and do not limit the scope of the present application. Likewise, the following examples are only some examples and not all examples of the present application, and all other examples obtained by a person of ordinary skill in the art without any inventive work are within the scope of the present application.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
It should be noted that the terms "first", "second" and "third" in the present application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of indicated technical features. Thus, a feature defined as "first," "second," or "third" may explicitly or implicitly include at least one of the feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specifically limited otherwise. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
For the detection of gas leakage, some solutions in the related art determine whether gas is leaking by analyzing whether the composition of the detected gas is present outside the gas pipeline; some solutions locate gas leaks by infrared thermal imaging, because a gas leak is usually accompanied by a change in temperature; still other solutions locate gas leaks by ultrasound. Among these schemes, gas composition analysis cannot determine the leak position, while infrared thermal imaging and acoustic imaging can display the leak position in a video picture. However, acoustic imaging technology only shows the effect of superimposing the video picture with an acoustic/thermal image in real time, and has the following disadvantage: although the leak point can be seen in the video picture, when the pipeline is large or pipelines are laid in large numbers, it is not easy to find the true leak point in reality from the video picture alone.
In view of these problems, the solution provided by the present application uses the directivity of laser light to indicate the sound source position in reality, and relates to the fields of acoustic imaging and lasers. Acoustic imaging technology determines the sound source position using microphone array technology; combined with a camera, it displays the distribution of sound sources as an image in which color and brightness represent sound intensity, helping people locate a sound source quickly and overcoming the limited sound localization ability of the human ear. A laser is an artificial light source obtained by amplifying light with an oscillator, and has excellent directivity, beam convergence, and interference resistance. By combining acoustic imaging technology with laser technology, the position of the sound source can be made more definite, so that the user can conveniently check where the sound source is located and carry out subsequent operations such as maintenance and management. The technical solution adopted by the present application is described in detail below.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of a sound source visualization method provided in the present application, where the method includes:
step 11: and acquiring sound information to be processed through the sound in the current environment acquired by the microphone array.
The sound information to be processed may be obtained by collecting the sound of the current environment with a microphone array, or may be read from a storage device in which the sound of the current environment, collected in advance, is stored. For example, in a pipeline inspection scenario, the sound information to be processed may include the sound of a gas leak; alternatively, in a conversation scenario, it may include the speech of the persons taking part in the conversation.
Step 12: and positioning the sound information to be processed to obtain the position information of the sound source in the current environment.
After the sound information to be processed is acquired, a sound source positioning method can be adopted to perform sound source positioning processing on the sound information to be processed so as to find the position of a sound source emitting sound in the current scene.
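The application does not prescribe a specific localization algorithm for step 12. A common approach for microphone arrays is to estimate the time difference of arrival (TDOA) between a microphone pair and convert it to an arrival angle; the sketch below uses GCC-PHAT for this, with all function names and parameter values being illustrative rather than taken from the application:

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Estimate the delay (in seconds) of `sig` relative to `ref` via GCC-PHAT."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    # Phase transform: whiten the cross-spectrum, keep only phase information.
    cc = np.fft.irfft(R / (np.abs(R) + 1e-12), n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

def azimuth_from_tdoa(tau, mic_distance, c=343.0):
    """Convert a delay between two mics into an arrival angle (radians)."""
    return np.arcsin(np.clip(tau * c / mic_distance, -1.0, 1.0))

# Example: the same noise signal arrives 5 samples later at mic 2.
fs = 16000
rng = np.random.default_rng(0)
src = rng.standard_normal(4096)
mic1 = src
mic2 = np.concatenate((np.zeros(5), src))[:len(src)]  # 5-sample delay
tau = gcc_phat(mic2, mic1, fs)
```

With several microphone pairs, such angle estimates can be intersected to recover both the pitch angle and the azimuth angle that later steps of the method record.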
Step 13: and sending the control instruction to the laser module so that the laser module emits light and points to the position of the sound source.
After the position of the sound source in the current environment is determined, a control instruction can be generated and sent to the laser module. Specifically, the control instruction carries the information that the laser module needs to emit light and rotate to a set orientation, the set orientation being one from which the laser emitted by the laser module can illuminate the sound source. The laser module includes a base and a laser mounted on the base; after receiving the control instruction, the base rotates to the orientation corresponding to the sound source, and the laser emits a beam toward the position of the sound source. The position of the sound source in the current scene is thus marked by laser illumination, so that the user can find it intuitively. For example, an inspector can quickly find the gas leak position from the laser spot falling on the pipeline, carry out pipeline maintenance, and improve the safety of the pipeline.
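The application does not define a format for the control instruction. One possible encoding, sketched below with an entirely hypothetical `LaserCommand` structure and mechanical limits, carries the emit flag plus the target orientation and clamps the localized angles to the rotating base's range:

```python
from dataclasses import dataclass

@dataclass
class LaserCommand:
    """Hypothetical control instruction: emit flag plus target orientation (degrees)."""
    emit: bool
    pan_deg: float   # azimuth of the sound source
    tilt_deg: float  # pitch angle of the sound source

def make_laser_command(azimuth_deg, pitch_deg,
                       pan_range=(-170.0, 170.0), tilt_range=(-30.0, 90.0)):
    """Clamp the localized angles to the base's (assumed) mechanical limits."""
    pan = min(max(azimuth_deg, pan_range[0]), pan_range[1])
    tilt = min(max(pitch_deg, tilt_range[0]), tilt_range[1])
    return LaserCommand(emit=True, pan_deg=pan, tilt_deg=tilt)

cmd = make_laser_command(azimuth_deg=35.2, pitch_deg=12.7)
```

The clamping step matters in practice: a localized source can lie outside the gimbal's reachable range, and a real controller would need to signal that case rather than silently point elsewhere.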
Referring to fig. 2, fig. 2 is a schematic flow chart of another embodiment of a sound source visualization method provided in the present application, where the method includes:
step 21: and acquiring sound information to be processed through the sound in the current environment acquired by the microphone array.
The sound information to be processed comprises multi-frame audio data, and the microphone array is used for collecting the sound in the current environment, so that the audio data can be obtained.
Step 22: and positioning the sound information to be processed to obtain the position information of the sound source in the current environment.
Sound source localization processing is performed on the currently input audio data (i.e., one frame of audio) by a sound source localization method to obtain the position of the sound source in the current frame, and the position information of the sound source is recorded. The position information includes the pitch angle and azimuth angle of the sound source relative to the microphone array.
Step 23: and judging whether the sound sources in the continuous preset number of frames of audio data are all in a preset range.
It is judged whether the difference between the position information of the sound source in the current frame of audio data and the position information of the sound source in the previous frame falls within a preset difference range; if so, the sound source in the current frame and the sound source in the previous frame are determined to be the same sound source. This judgment is performed a preset number of consecutive times; if the sound source in the current frame matches the sound source in the previous frame every time, the sound sources in the consecutive preset number of frames of audio data are all within the preset range. It is understood that the preset number and the preset range can be configured by the user according to application requirements.
Further, the preset difference range includes a first difference range and a second difference range. It is judged whether the difference between the pitch angle of the sound source in the current frame of audio data and that in the previous frame falls within the first difference range, and whether the difference between the azimuth angles of the two frames falls within the second difference range; if both conditions hold, the sound sources in the two frames of audio data are the same.
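The consecutive-frame judgment described above can be sketched as follows; the tolerance values and function names are illustrative, not prescribed by the application:

```python
def same_source(prev, curr, pitch_tol_deg=3.0, az_tol_deg=3.0):
    """prev/curr are (pitch_deg, azimuth_deg) tuples; tolerances are the
    first and second difference ranges (illustrative defaults)."""
    return (abs(curr[0] - prev[0]) <= pitch_tol_deg and
            abs(curr[1] - prev[1]) <= az_tol_deg)

def stable_source(frames, n=5, **tol):
    """True if the last n consecutive frames each match their predecessor,
    i.e. the preset number of frames all contain the same sound source."""
    if len(frames) < n + 1:
        return False
    recent = frames[-(n + 1):]
    return all(same_source(a, b, **tol) for a, b in zip(recent, recent[1:]))

# Six frames of (pitch, azimuth) estimates that jitter within tolerance.
frames = [(12.0, 35.0), (12.4, 34.8), (11.9, 35.3),
          (12.1, 35.0), (12.2, 34.9), (12.0, 35.1)]
ok = stable_source(frames, n=5)
```

Requiring a run of matching frames before generating the control instruction prevents the laser from chasing transient noise or single-frame localization errors.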
Step 24: if the sound sources are all in the preset range in the continuous preset number of frames of audio data, generating a control instruction, and sending the control instruction to the laser module so that the laser module emits light and points to the position of the sound source.
If the position of the sound source is the same in the consecutive preset number of frames of audio data, there is one sound source within the preset range. At this point, a control instruction can be generated and sent to the laser module, so that the laser emitted by the laser module points at the position of the sound source, achieving localization in reality.
Step 25: and judging whether a selection instruction is received or not when the sound sources in the continuous preset number of frames of audio data are not in the preset range.
If the sound sources in the consecutive preset number of frames of audio data are not all within the preset range, that is, the sound source in at least one frame falls outside it, then sound sources exist at at least two positions, and it can be determined that at least two sound sources exist in the current environment. In this case, the user can specify which sound source the laser emitted by the laser module should point at; that is, it is judged whether a selection instruction issued by the user has been received, the selection instruction containing the information of the sound source to be pointed at by the laser module. For example, selection options may be provided on the display interface for the user to choose from; when the user clicks the option matching a sound source, a selection instruction is generated.
It will be appreciated that other means may be employed to determine the sound source at which the laser emitted by the laser module is directed, for example, selecting the sound source with the highest decibel level as the target of the laser.
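The loudest-source fallback can be sketched as follows, assuming per-source audio samples are available after separation; the level measure and all names are illustrative:

```python
import math

def source_level_db(samples):
    """RMS level of one source's audio frame, in dB (illustrative measure)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-12))

def loudest_source(sources):
    """sources: dict mapping a source id to its samples. Returns the loudest id."""
    return max(sources, key=lambda sid: source_level_db(sources[sid]))

# Two hypothetical leak candidates; leak_A is clearly louder.
sources = {
    "leak_A": [0.20, -0.18, 0.21, -0.19],
    "leak_B": [0.02, -0.01, 0.02, -0.02],
}
target = loudest_source(sources)
```

Choosing the loudest source is a reasonable default for leak detection, since a larger leak generally radiates more acoustic energy, but the user-selection path remains necessary when several sources are comparable in level.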
Step 26: and if the selection instruction is received, controlling the laser module to point to the position of the sound source matched with the selection instruction.
After receiving a selection instruction issued by a user, a control instruction can be generated and sent to the laser module, so that the laser emitted by the laser module points to the sound source selected by the user.
Step 27: and judging whether a preset positioning end condition is met.
After the laser emitted by the laser module points at the position of a certain sound source, detection can continue in order to judge whether a preset positioning end condition is met. If the condition is met, positioning ends; if not, the step of collecting the sound in the current environment through the microphone array to obtain the sound information to be processed is executed again, that is, the flow returns to step 21, until the preset positioning end condition is met.
Further, it can be judged whether an end instruction issued by the user has been received, the end instruction indicating that localization of the sound source should end; if the end instruction is received, the user wants to end positioning, and the preset positioning end condition is determined to be met. It can be understood that other means may also be adopted, for example, setting a preset positioning duration and judging whether it has been reached: if the time difference between the start of positioning and the current time reaches the preset positioning duration, the preset positioning end condition is met and positioning ends.
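The end-condition check described above (a user end instruction, or a preset positioning duration elapsing) can be sketched as below; the 300-second default is an illustrative value, not one given in the application:

```python
import time

def should_end(end_requested, start_time, max_duration_s=300.0, now=None):
    """End positioning if the user asked to stop or the preset duration elapsed.
    `now` can be injected for testing; otherwise the monotonic clock is used."""
    now = time.monotonic() if now is None else now
    return end_requested or (now - start_time) >= max_duration_s

# Simulated: no user request, but 301 s elapsed against a 300 s limit.
done = should_end(False, start_time=0.0, now=301.0)
```

Using a monotonic clock rather than wall-clock time keeps the duration check correct even if the system clock is adjusted during a positioning session.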
This embodiment provides a method for displaying a sound source position based on the directivity of laser light. A microphone array is used for sound source localization, and after localization is completed, the laser emitted by a laser module points at the actual position of the sound source. The sound source position is thus indicated in reality by the laser, making it convenient for the user to find the sound source.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an embodiment of an acoustic imaging apparatus provided in the present application, in which the acoustic imaging apparatus 30 includes a microphone array 31, a processing module 32, and a laser module 33.
The microphone array 31 is used to collect the sound in the current environment to obtain the sound information to be processed. The processing module 32 is configured to perform localization processing on the sound information to obtain the position information of the sound source in the current environment; this position information may be the orientation and/or distance of the sound source relative to the acoustic imaging device 30. Although the absolute position of the sound source does not change, the position information acquired by the acoustic imaging device 30 may change as the device itself moves.
Further, as shown in fig. 3, the acoustic imaging device 30 further includes a camera module 34 and a display module 35. The camera module 34 is connected to the processing module 32 and is configured to photograph the current scene to obtain video data and transmit the video data to the processing module 32, so that the processing module 32 superimposes the video data with the position information of the sound source to obtain data to be displayed, the data to be displayed including the display picture in the video data and the position information of the sound source. The display module 35 is connected to the processing module 32 and is configured to display the data to be displayed sent by the processing module 32, thereby showing the position of the sound source and the picture of the current scene in real time.
The laser module 33 is configured to emit light and point at the position of the sound source based on the position information of the sound source. Specifically, after the sound source is localized, the processing module 32 generates a control instruction, sends it to the laser module 33, and controls the laser module 33 to emit laser light in the localized direction, so that the user can visually see the position of the sound source.
In some embodiments, when the user holds the acoustic imaging device 30 and moves toward the sound source under the guidance of the laser, the sound source may keep sounding; localization is then performed continuously, the position information of the sound source is updated, and the laser module 33 changes its direction in real time according to that information, ensuring that it always points at the sound source. In other embodiments, the sound source may stop producing sound. To keep localizing the sound source after it falls silent, the acoustic imaging device 30 may further include an acceleration sensor connected to the processing module 32 and used to acquire displacement information of the acoustic imaging device 30; the acceleration sensor may be a gyroscope or another device capable of providing positioning. The processing module 32 calculates the updated position information of the sound source (i.e., the orientation of the sound source relative to the moved acoustic imaging device 30) based on the displacement information of the acoustic imaging device 30 and the most recently acquired position information of the sound source. The acoustic imaging device 30 may be an acoustic imager.
For example, suppose the sound source emits sound at second T1, and the processing module 32 calculates the position information of the sound source, recorded as G1. If the sound source stops producing sound at second (T1+5), that is, the sound in the current environment disappears, the processing module 32 can send a trigger signal to the acceleration sensor so that it operates and obtains the position information of the current acoustic imaging device 30, recorded as G2. The processing module 32 then recalculates the pitch angle and azimuth angle of the sound source relative to the acoustic imaging device 30 from the position information G1 and G2 to obtain the current position information of the sound source, recorded as G3. The processing module 32 can then control the emission angle of the laser module 33 according to G3, so that the laser emitted by the laser module 33 reaches the position of the sound source, and the position of the sound source remains visible even after the sound in the current environment has disappeared.
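One way to recompute the source angles after the device moves is plain vector geometry. Note an added assumption: the application records only pitch and azimuth, so the source distance used below would have to come from elsewhere (e.g. a range estimate); all names and conventions are illustrative:

```python
import math

def angles_to_vector(pitch_deg, azimuth_deg, distance):
    """Pitch measured up from horizontal, azimuth in the horizontal plane."""
    p, a = math.radians(pitch_deg), math.radians(azimuth_deg)
    return (distance * math.cos(p) * math.cos(a),
            distance * math.cos(p) * math.sin(a),
            distance * math.sin(p))

def vector_to_angles(v):
    x, y, z = v
    horiz = math.hypot(x, y)
    return math.degrees(math.atan2(z, horiz)), math.degrees(math.atan2(y, x))

def update_source_angles(pitch_deg, azimuth_deg, distance, displacement):
    """Recompute the source direction after the device moved by
    `displacement` = (dx, dy, dz), expressed in the original device frame."""
    sx, sy, sz = angles_to_vector(pitch_deg, azimuth_deg, distance)
    dx, dy, dz = displacement
    return vector_to_angles((sx - dx, sy - dy, sz - dz))

# Source initially 10 m dead ahead; the device moves 2 m to one side.
new_pitch, new_az = update_source_angles(0.0, 0.0, 10.0, (0.0, -2.0, 0.0))
```

This corresponds to deriving G3 from G1 and G2 in the example above: the old source vector minus the device displacement gives the new relative direction, which then drives the laser emission angle.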
Referring to fig. 4, fig. 4 is a schematic structural diagram of an embodiment of an acoustic imaging system provided in the present application, an acoustic imaging system 40 includes an acoustic imaging apparatus 41 and a laser device 42 that are connected to each other, the acoustic imaging apparatus 41 is configured to control the laser device 42, the acoustic imaging apparatus 41 includes a memory 411 and a processor 412 that are connected to each other, the memory 411 is configured to store a computer program, and the computer program is configured to implement the sound source visualization method in the foregoing embodiment when executed by the processor 412, and the laser device 42 is a laser module in the foregoing embodiment.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an embodiment of a computer-readable storage medium 50 provided by the present application, where the computer-readable storage medium 50 is used for storing a computer program 51, and the computer program 51 is used for implementing the sound source visualization method in the foregoing embodiment when being executed by a processor.
The computer-readable storage medium 50 may be a server, a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of modules or units is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings, or which are directly or indirectly applied to other related technical fields, are intended to be included within the scope of the present application.

Claims (11)

1. A method of visualizing a sound source, comprising:
acquiring sound in the current environment through a microphone array to obtain sound information to be processed;
positioning the sound information to be processed to obtain the position information of the sound source in the current environment;
and sending a control instruction to a laser module so that the laser module emits light and points to the position of the sound source.
2. The sound source visualization method according to claim 1, wherein the sound information to be processed comprises a plurality of frames of audio data, and before the step of sending the control instruction to the laser module, the method further comprises:
judging whether sound sources in the audio data of the continuous preset number of frames are all in a preset range;
and if so, generating the control instruction.
3. The sound source visualization method according to claim 2, wherein the step of determining whether the sound sources in the preset number of consecutive frames of audio data all fall within the preset range comprises:
determining whether the difference between the position information of the sound source in a current frame of audio data and the position information of the sound source in the previous frame of audio data falls within a preset difference range;
and if so, determining that the sound source in the current frame of audio data and the sound source in the previous frame of audio data are the same sound source.
4. The sound source visualization method according to claim 3, wherein the position information comprises a pitch angle and an azimuth angle of the sound source relative to the microphone array, the preset difference range comprises a first difference range and a second difference range, and the step of determining whether the difference between the position information of the sound source in the current frame of audio data and that of the sound source in the previous frame of audio data falls within the preset difference range comprises:
determining whether the difference between the pitch angle of the sound source in the current frame of audio data and the pitch angle of the sound source in the previous frame of audio data falls within the first difference range, and whether the difference between the azimuth angle of the sound source in the current frame of audio data and the azimuth angle of the sound source in the previous frame of audio data falls within the second difference range.
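Claims 2-4 describe a debouncing rule: the laser is triggered only after the localized direction has stayed within per-angle tolerance bands for a run of consecutive frames. A minimal sketch, assuming each frame yields a (pitch, azimuth) tuple in degrees and assuming illustrative 10° tolerances (the patent leaves the actual ranges unspecified):

```python
def same_source(prev, curr, pitch_tol=10.0, azim_tol=10.0):
    """Claim 3/4 check: consecutive frames belong to the same source when
    both angle differences stay inside their (assumed) tolerance ranges."""
    d_pitch = abs(curr[0] - prev[0])
    d_azim = abs(curr[1] - prev[1]) % 360.0
    d_azim = min(d_azim, 360.0 - d_azim)  # handle wrap-around, e.g. 359 vs 1
    return d_pitch <= pitch_tol and d_azim <= azim_tol

def stable_for(frames, n):
    """Claim 2 check: True when the last n frames all localize to the same
    source, i.e. the control instruction may be generated."""
    if len(frames) < n:
        return False
    recent = frames[-n:]
    return all(same_source(a, b) for a, b in zip(recent, recent[1:]))

# Four jittery but consistent frames -> stable; a jump breaks stability.
track = [(10.0, 45.0), (11.0, 46.5), (10.5, 44.0), (12.0, 45.5)]
```

Comparing each frame only to its predecessor, as claim 3 does, tolerates slow drift while rejecting the abrupt jumps that indicate a second source.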
5. The sound source visualization method according to claim 2, wherein the method further comprises:
when the sound sources in the preset number of consecutive frames of audio data do not all fall within the preset range, determining that at least two sound sources exist in the current environment;
determining whether a selection instruction is received, the selection instruction comprising information on the sound source to which the laser module is to point;
and if so, controlling the laser module to point at the position of the sound source matching the selection instruction.
6. The sound source visualization method according to claim 1, wherein the method further comprises:
determining whether a preset localization end condition is satisfied;
and if not, returning to the step of acquiring sound in the current environment through the microphone array to obtain the sound information to be processed.
7. The sound source visualization method according to claim 6, wherein the step of determining whether the preset localization end condition is satisfied comprises:
determining whether an end instruction has been received or whether a preset localization duration has been reached.
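Claims 6 and 7 wrap the method in a loop with two exit conditions: an explicit end instruction or a timeout. A sketch with hypothetical callables standing in for the hardware-facing steps:

```python
import time

def localization_loop(acquire, localize, point_laser,
                      max_seconds, stop_requested):
    # Claims 6/7: repeat acquire -> localize -> point until an end
    # instruction arrives or the preset localization duration elapses.
    start = time.monotonic()
    while not stop_requested() and time.monotonic() - start < max_seconds:
        audio = acquire()
        point_laser(localize(audio))

# Drive the loop with stubs; request a stop after three iterations.
calls = {"n": 0}
def stop_after_three():
    calls["n"] += 1
    return calls["n"] > 3

localization_loop(acquire=lambda: b"",
                  localize=lambda audio: (0.0, 0.0),
                  point_laser=lambda pos: None,
                  max_seconds=10.0,
                  stop_requested=stop_after_three)
```

`time.monotonic()` is used rather than wall-clock time so the duration check is immune to system clock adjustments.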
8. An acoustic imaging device, comprising a microphone array, a processing module and a laser module, wherein the microphone array is configured to acquire sound in a current environment to obtain sound information to be processed; the processing module is connected to the microphone array and configured to localize the sound information to be processed to obtain position information of a sound source; and the laser module is connected to the processing module and configured to emit light pointing at the position of the sound source based on the position information of the sound source.
9. The acoustic imaging device according to claim 8, wherein
the acoustic imaging device further comprises an acceleration sensor configured to acquire displacement information of the acoustic imaging device; and after the sound source stops sounding, the processing module determines updated position information of the sound source based on the displacement information of the acoustic imaging device and the position information of the sound source.
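Claim 9 lets the device keep the laser on a now-silent source by compensating for its own motion. One geometric sketch: convert the last known direction to Cartesian coordinates, subtract the device's displacement (the source itself is fixed), and convert back. This assumes, hypothetically, that a source distance estimate is available alongside the two angles; the patent does not specify how distance is obtained.

```python
import math

def update_direction(distance, pitch_deg, azim_deg, displacement):
    """Re-derive (distance, pitch, azimuth) to a fixed source after the
    device moves by `displacement` = (dx, dy, dz) in its original frame."""
    p, a = math.radians(pitch_deg), math.radians(azim_deg)
    # Spherical -> Cartesian in the original device frame.
    x = distance * math.cos(p) * math.cos(a)
    y = distance * math.cos(p) * math.sin(a)
    z = distance * math.sin(p)
    # The source is fixed; subtract the device's own displacement.
    x, y, z = x - displacement[0], y - displacement[1], z - displacement[2]
    r = math.sqrt(x * x + y * y + z * z)
    return r, math.degrees(math.asin(z / r)), math.degrees(math.atan2(y, x))

# Source 2 m straight ahead; the device then slides 1 m sideways (+y).
r, pitch, azim = update_direction(2.0, 0.0, 0.0, (0.0, 1.0, 0.0))
```

In practice the displacement would come from double-integrating the acceleration sensor's readings, with the usual drift caveats of inertial dead reckoning.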
10. An acoustic imaging system, comprising an acoustic imaging apparatus and a laser device connected to each other, the acoustic imaging apparatus being configured to control the laser device, wherein the acoustic imaging apparatus comprises a memory and a processor connected to each other, the memory being configured to store a computer program which, when executed by the processor, implements the sound source visualization method according to any one of claims 1-7.
11. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the sound source visualization method according to any one of claims 1-7.
CN202111022397.0A 2021-09-01 2021-09-01 Sound source visualization method, device and system and computer readable storage medium Pending CN113759314A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111022397.0A CN113759314A (en) 2021-09-01 2021-09-01 Sound source visualization method, device and system and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113759314A true CN113759314A (en) 2021-12-07

Family

ID=78792512

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111022397.0A Pending CN113759314A (en) 2021-09-01 2021-09-01 Sound source visualization method, device and system and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113759314A (en)

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101201399A (en) * 2007-12-18 2008-06-18 北京中星微电子有限公司 Sound localization method and system
CN203133282U (en) * 2013-03-07 2013-08-14 连芷萱 Object positioning mobile terminal based on big dipper positioning system
CN103379424A (en) * 2012-04-24 2013-10-30 华为技术有限公司 Sound mixing method and multi-point control server
CN104730495A (en) * 2015-04-16 2015-06-24 清华大学苏州汽车研究院(相城) Portable sound source positioning device and positioning method adopted by the same
CN105182286A (en) * 2015-10-09 2015-12-23 北京长城电子装备有限责任公司 Sonar guiding escape rescue system
CN107152892A (en) * 2017-07-11 2017-09-12 于伟 A kind of Portable Automatic scoring round target device based on telephoto lens
CN207473084U (en) * 2017-10-11 2018-06-08 华北电力大学(保定) A kind of sound source locating device
CN208060656U (en) * 2018-02-09 2018-11-06 日照港股份有限公司动力分公司 High and low voltage switchgear pressure test point of discharge automatic positioning equipment
CN109658442A (en) * 2018-12-21 2019-04-19 广东工业大学 Multi-object tracking method, device, equipment and computer readable storage medium
CN109669460A (en) * 2018-12-29 2019-04-23 西安电子科技大学 The intelligent control method of the middle-size and small-size photoelectric turntable of target detection
CN110428008A (en) * 2019-08-02 2019-11-08 深圳市唯特视科技有限公司 A kind of target detection and identification device and method based on more merge sensors
CN211527601U (en) * 2020-03-08 2020-09-18 江苏尚美环保科技有限公司 Low-frequency band noise detector
CN111739557A (en) * 2020-06-19 2020-10-02 浙江讯飞智能科技有限公司 Equipment fault positioning method, device, equipment and storage medium
CN111768444A (en) * 2020-04-30 2020-10-13 苏州思必驰信息科技有限公司 Sound source based information processing method and device and computer readable medium
CN111986229A (en) * 2019-05-22 2020-11-24 阿里巴巴集团控股有限公司 Video target detection method, device and computer system
CN112629519A (en) * 2020-11-10 2021-04-09 湖北久之洋红外系统股份有限公司 Handheld target positioning observer and navigation method thereof
CN112703375A (en) * 2018-07-24 2021-04-23 弗兰克公司 System and method for projecting and displaying acoustic data
CN113129339A (en) * 2021-04-28 2021-07-16 北京市商汤科技开发有限公司 Target tracking method and device, electronic equipment and storage medium

Non-Patent Citations (3)

Title
WANG Shiqing; CUI Xiaodong; ZHANG Qunfei: "Design of a sound source direction-finding teaching experiment system based on a microphone array", Experimental Technology and Management, no. 06 *
JIN Haowen: "Design of a wide-field-of-view target detection system based on a high-definition CMOS camera", China Master's Theses Full-text Database, Information Science and Technology Series *
MA Yuelong: "A UAV video target localization method fusing monocular visual SLAM and GPS", Journal of Geomatics Science and Technology *

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN114325584A (en) * 2022-03-14 2022-04-12 杭州兆华电子股份有限公司 Synthetic aperture-based multi-array-element ultrasonic sound source three-dimensional imaging method and system
CN114325584B (en) * 2022-03-14 2022-06-24 杭州兆华电子股份有限公司 Synthetic aperture-based multi-array-element ultrasonic sound source three-dimensional imaging method and system
CN114859194A (en) * 2022-07-07 2022-08-05 杭州兆华电子股份有限公司 Non-contact-based partial discharge detection method and device
CN114859194B (en) * 2022-07-07 2022-09-23 杭州兆华电子股份有限公司 Non-contact-based partial discharge detection method and device

Similar Documents

Publication Publication Date Title
KR102308937B1 (en) Virtual and real object recording on mixed reality devices
US10165386B2 (en) VR audio superzoom
US11528576B2 (en) Distributed audio capturing techniques for virtual reality (VR), augmented reality (AR), and mixed reality (MR) systems
US9693009B2 (en) Sound source selection for aural interest
US9913054B2 (en) System and method for mapping and displaying audio source locations
US9854371B2 (en) Information processing system, apparatus and method for measuring a head-related transfer function
US10754608B2 (en) Augmented reality mixing for distributed audio capture
CN113759314A (en) Sound source visualization method, device and system and computer readable storage medium
US20110193935A1 (en) Controlling a video window position relative to a video camera position
EP2622851A1 (en) Method and apparatus for tracking an audio source in a video conference using multiple sensors
Inoue et al. Visualization system for sound field using see-through head-mounted display
US10031718B2 (en) Location based audio filtering
US20200202626A1 (en) Augmented Reality Noise Visualization
US9304582B1 (en) Object-based color detection and correction
KR20190094166A (en) Method and apparatus for overlaying virtual image and audio data on the reproduction of real scenes, and mobile devices
JP2017016465A (en) Display control method, information processing apparatus, and display control program
CN110488221B (en) Device positioning method and system in multi-device scene
KR20100121086A (en) Ptz camera application system for photographing chase using sound source recognition and method therefor
TWI759004B (en) Target object display method, electronic device and computer-readable storage medium
US9915528B1 (en) Object concealment by inverse time of flight
JP6600186B2 (en) Information processing apparatus, control method, and program
US20220208212A1 (en) Information processing device, information processing method, and program
JP2023037510A (en) Information processing system, information processing method, and information processing program
CN113724382A (en) Map generation method and device and electronic equipment
JP2021108411A (en) Video processing apparatus and video processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211207