CN117784012A - Detection method, device, equipment, storage medium and product based on acoustic wave positioning - Google Patents

Detection method, device, equipment, storage medium and product based on acoustic wave positioning

Info

Publication number
CN117784012A
Authority
CN
China
Prior art keywords
sound, sound receiving units, determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311787039.8A
Other languages
Chinese (zh)
Inventor
张璐
于鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing SoundAI Technology Co Ltd
Original Assignee
Beijing SoundAI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing SoundAI Technology Co Ltd filed Critical Beijing SoundAI Technology Co Ltd
Priority to CN202311787039.8A
Publication of CN117784012A


Landscapes

  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

The application provides a detection method, device, equipment, storage medium and product based on acoustic wave positioning, and belongs to the technical field of positioning. The method comprises the following steps: when a sound source in a detection area emits sound, receiving sound signals through a plurality of sound receiving units included in a sound receiving array in the detection area; determining metadata of the plurality of sound receiving units based on the sound signals received by the plurality of sound receiving units; determining a spatial parameter model of the detection area, the spatial parameter model being consistent with the spatial layout of the detection area; and determining position information of the sound source based on the metadata of the plurality of sound receiving units and the spatial parameter model. This improves the reliability of positioning detection.

Description

Detection method, device, equipment, storage medium and product based on acoustic wave positioning
Technical Field
The present disclosure relates to the field of positioning technologies, and in particular, to a detection method, device, equipment, storage medium, and product based on acoustic wave positioning.
Background
In many scenarios, users need to be located so that their current position is known in time, for example to improve safety. In the related art, positioning is typically performed by video detection: a picture containing the user is captured, and the user's current location is determined based on the picture. However, video detection suffers from problems such as a limited detection viewing angle, so the reliability of positioning detection is low.
Disclosure of Invention
The embodiments of the present application provide a detection method, device, equipment, storage medium and product based on acoustic wave positioning, which can improve the reliability of positioning detection. The technical solution is as follows:
In one aspect, a detection method based on acoustic wave positioning is provided, the method comprising:
when a sound source in a detection area emits sound, receiving sound signals through a plurality of sound receiving units included in a sound receiving array in the detection area;
determining metadata of the plurality of sound receiving units based on sound signals received by the plurality of sound receiving units;
determining a spatial parameter model of the detection area, wherein the spatial parameter model is consistent with the spatial layout of the detection area;
and determining position information of the sound source based on the metadata of the plurality of sound receiving units and the spatial parameter model.
In some embodiments, the metadata of the sound pickup unit includes a time when the sound pickup unit received a sound signal and location information of the sound pickup unit;
the determining the position information of the sound source based on the metadata of the plurality of sound receiving units and the space parameter model comprises the following steps:
grouping the plurality of sound receiving units to obtain a plurality of groups of sound receiving units, wherein one group of sound receiving units comprises two sound receiving units;
For any group of sound receiving units, determining the phase difference of the sound signals received by the group of sound receiving units based on the time difference of the sound signals received by the group of sound receiving units;
and determining the position information of the sound source based on the phase difference of the sound signals received by the plurality of groups of sound receiving units, the position information of the plurality of sound receiving units and the space parameter model.
In some embodiments, the determining the location information of the sound source based on the phase differences of the sound signals received by the plurality of groups of sound receiving units, the location information of the plurality of sound receiving units, and the spatial parameter model includes:
for any group of sound receiving units, determining a first position curve based on the phase difference of sound signals received by the group of sound receiving units and the position information of two sound receiving units included by the group of sound receiving units, wherein the phase difference of sound signals sent by position points on the first position curve to the group of sound receiving units is the phase difference of the sound signals received by the group of sound receiving units;
correcting the first position curve based on the space parameter model to obtain a second position curve;
and determining intersection points of a plurality of second position curves corresponding to the plurality of groups of sound receiving units to obtain the position information of the sound source.
In some embodiments, the metadata of the sound pickup unit includes a time when the sound pickup unit received a sound signal and location information of the sound pickup unit;
the determining the position information of the sound source based on the metadata of the plurality of sound receiving units and the space parameter model comprises the following steps:
establishing a sound receiving array model based on the position information of the plurality of sound receiving units;
determining phase differences corresponding to the plurality of sound receiving units based on time differences of the sound signals received by the plurality of sound receiving units;
forming a first beam by adjusting weights of the sound receiving units in the sound receiving array model based on phase differences of the plurality of sound receiving units, wherein the variance of the first beam is minimum;
determining an azimuth of the sound source relative to the plurality of sound pickup units based on the first beam;
and determining position information of the sound source based on the azimuth angles of the sound source relative to the plurality of sound receiving units, the position information of the plurality of sound receiving units, and the spatial parameter model.
In some embodiments, the metadata of the sound pickup unit includes a time when the sound pickup unit received a sound signal and location information of the sound pickup unit;
The determining the position information of the sound source based on the metadata of the plurality of sound receiving units and the space parameter model comprises the following steps:
forming a second beam based on a time difference of sound signals received by the plurality of sound receiving units and a geometric structure of the sound receiving array;
changing the sound signal of a target sound receiving unit in the second beam by controlling the response power of the target sound receiving unit in the sound receiving array, to obtain a third beam;
determining an azimuth of the sound source relative to the plurality of sound pickup units based on the third beam;
and determining position information of the sound source based on the azimuth angles of the sound source relative to the plurality of sound receiving units, the position information of the plurality of sound receiving units, and the spatial parameter model.
In some embodiments, the determining the location information of the sound source based on the azimuth of the sound source relative to the plurality of sound pickup units, the location information of the plurality of sound pickup units, and the spatial parameter model includes:
determining first position information based on azimuth angles of the sound source relative to the plurality of sound pickup units and position information of the plurality of sound pickup units;
and correcting the first position information based on the space parameter model to obtain the position information of the sound source.
In some embodiments, the acoustic pickup array comprises a microphone array, an ultrasonic acoustic pickup array, or a hydrophone array.
In some embodiments, the determining metadata of the plurality of sound pickup units based on the sound signals received by the plurality of sound pickup units includes:
determining metadata of the plurality of sound receiving units based on sound signals received by the plurality of sound receiving units through an edge computing unit which is arranged together with the sound receiving array, and transmitting the metadata of the plurality of sound receiving units to a space computing unit;
the space calculating unit is used for determining the position information of the sound source based on the metadata of the plurality of sound receiving units and the space parameter model.
In some embodiments, the determining the spatial parametric model of the detection region includes:
receiving an input spatial parameter model of the detection area; or,
and under the condition that the detection area is an indoor area, establishing communication connection with the sweeping robot in the indoor area, and receiving a space parameter model sent by the sweeping robot based on the communication connection.
In another aspect, there is provided a detection apparatus based on acoustic positioning, the apparatus comprising:
The receiving module is used for receiving sound signals through a plurality of sound receiving units included in the sound receiving array in the detection area when the sound source in the detection area emits sound;
a first determining module, configured to determine metadata of the plurality of sound receiving units based on sound signals received by the plurality of sound receiving units;
the second determining module is used for determining a space parameter model of the detection area, and the space parameter model is consistent with the space layout of the detection area;
and the third determining module is used for determining the position information of the sound source based on the metadata of the plurality of sound receiving units and the space parameter model.
In some embodiments, the metadata of the sound pickup unit includes a time when the sound pickup unit received a sound signal and location information of the sound pickup unit;
the third determining module is configured to group the plurality of sound receiving units to obtain a plurality of groups of sound receiving units, where a group of sound receiving units includes two sound receiving units; for any group of sound receiving units, determine the phase difference of the sound signals received by the group of sound receiving units based on the time difference of the sound signals received by the group of sound receiving units; and determine the position information of the sound source based on the phase differences of the sound signals received by the plurality of groups of sound receiving units, the position information of the plurality of sound receiving units, and the spatial parameter model.
In some embodiments, the third determining module is configured to determine, for any group of sound receiving units, a first location curve based on a phase difference of a sound signal received by the group of sound receiving units and location information of two sound receiving units included in the group of sound receiving units, where a phase difference of a sound signal sent by a location point on the first location curve reaching the group of sound receiving units is a phase difference of a sound signal received by the group of sound receiving units; correcting the first position curve based on the space parameter model to obtain a second position curve; and determining intersection points of a plurality of second position curves corresponding to the plurality of groups of sound receiving units to obtain the position information of the sound source.
In some embodiments, the metadata of the sound pickup unit includes a time when the sound pickup unit received a sound signal and location information of the sound pickup unit;
the third determining module is configured to establish a sound receiving array model based on the position information of the plurality of sound receiving units; determine phase differences corresponding to the plurality of sound receiving units based on time differences of the sound signals received by the plurality of sound receiving units; form a first beam, whose variance is minimized, by adjusting the weights of the sound receiving units in the sound receiving array model based on the phase differences of the plurality of sound receiving units; determine an azimuth of the sound source relative to the plurality of sound receiving units based on the first beam; and determine the position information of the sound source based on the azimuth angles of the sound source relative to the plurality of sound receiving units, the position information of the plurality of sound receiving units, and the spatial parameter model.
In some embodiments, the metadata of the sound pickup unit includes a time when the sound pickup unit received a sound signal and location information of the sound pickup unit;
the third determining module is configured to form a second beam based on the time differences of the sound signals received by the plurality of sound receiving units and the geometric structure of the sound receiving array; change the sound signal of a target sound receiving unit in the second beam by controlling the response power of the target sound receiving unit in the sound receiving array, to obtain a third beam; determine an azimuth of the sound source relative to the plurality of sound receiving units based on the third beam; and determine the position information of the sound source based on the azimuth angles of the sound source relative to the plurality of sound receiving units, the position information of the plurality of sound receiving units, and the spatial parameter model.
In some embodiments, the third determining module is configured to determine first location information based on azimuth angles of the sound source relative to the plurality of sound pickup units and location information of the plurality of sound pickup units; and correcting the first position information based on the space parameter model to obtain the position information of the sound source.
In some embodiments, the acoustic pickup array comprises a microphone array, an ultrasonic acoustic pickup array, or a hydrophone array.
In some embodiments, the first determining module is configured to determine metadata of the plurality of sound receiving units based on sound signals received by the plurality of sound receiving units through an edge computing unit disposed together with the sound receiving array, and transmit the metadata of the plurality of sound receiving units to a space computing unit;
the space calculating unit is used for determining the position information of the sound source based on the metadata of the plurality of sound receiving units and the space parameter model.
In some embodiments, the second determining module is configured to receive an input spatial parametric model of the detection region; or,
and under the condition that the detection area is an indoor area, the second determining module is used for establishing communication connection with the sweeping robot in the indoor area and receiving a space parameter model sent by the sweeping robot based on the communication connection.
In another aspect, an edge computing device is provided that includes one or more processors and one or more memories having at least one program code stored therein, the at least one program code loaded and executed by the one or more processors to implement the sonic location-based detection method of any of the above-described implementations.
In another aspect, a computer readable storage medium is provided, in which at least one program code is stored, where the at least one program code is loaded and executed by a processor to implement the acoustic positioning based detection method according to any of the above implementations.
In another aspect, a computer program product is provided, the computer program product comprising computer program code stored in a computer readable storage medium, the computer program code being read from the computer readable storage medium by a processor of an edge computing device, the processor executing the computer program code such that the edge computing device performs the acoustic positioning based detection method of any of the above implementations.
In the embodiments of the present application, a sound receiving array is arranged in the detection area, sound signals emitted by the sound source are received through the plurality of sound receiving units in the array, and the sound source is located in combination with the spatial parameter model of the detection area. Because sound signals propagate over a wide area, the sound source can be located no matter which corner of the detection area it is in, which improves the reliability of sound source localization.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a detection system based on acoustic positioning according to an embodiment of the present application;
FIG. 2 is a flowchart of a detection method based on acoustic positioning according to an embodiment of the present application;
FIG. 3 is a flowchart of another detection method based on acoustic positioning according to an embodiment of the present application;
FIG. 4 is a flowchart of another detection method based on acoustic positioning according to an embodiment of the present application;
FIG. 5 is a flowchart of another detection method based on acoustic positioning according to an embodiment of the present application;
FIG. 6 is a block diagram of a detection device based on acoustic positioning according to an embodiment of the present application;
fig. 7 is a block diagram of an edge computing device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be noted that, information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals referred to in this application are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of relevant data is required to comply with relevant laws and regulations and standards of relevant countries and regions. For example, the sound signal, the position information of the sound receiving unit, etc. referred to in the present application are acquired with sufficient authorization.
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims of this application and in the drawings, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprising," "including," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
FIG. 1 is a schematic diagram of a detection system based on acoustic positioning according to an embodiment of the present application; referring to fig. 1, the detection system includes: a sound receiving array, an edge computing unit, a parameter computing unit, and a space computing unit. The output end of the sound receiving array is electrically connected to the input end of the edge computing unit, and the output ends of the edge computing unit and the parameter computing unit are each electrically connected to the space computing unit. The sound receiving array is arranged in the detection area and includes a plurality of sound receiving units, which can be arranged in the detection area in any geometric structure. For example, the sound receiving units are deployed uniformly in the detection area, so that sound sources anywhere in the detection area can be localized. As another example, the sound receiving units are deployed in parts of the detection area where the video viewing angle is limited; that is, the embodiment of the present application can supplement video detection with a detection method based on acoustic wave positioning.
In some embodiments, the sound receiving array may be a microphone array; correspondingly, the sound receiving array includes a plurality of microphones. A microphone array may be applied in everyday situations, i.e., the detection area may be any daily living environment. In other embodiments, the sound receiving array may be an ultrasonic sound receiving array; correspondingly, the sound receiving array includes a plurality of ultrasonic sound receiving units. An ultrasonic sound receiving array works at frequencies outside the audible range and is used to pick up sounds emitted by mechanical vibration, i.e., the detection area may be an area where mechanical equipment is located. In other embodiments, the sound receiving array may be a hydrophone array; correspondingly, the sound receiving array includes a plurality of hydrophones. A hydrophone array is an underwater sound receiving device whose usage scenario is detecting abnormal sound in an underwater area, i.e., the detection area may be an underwater area.
In some embodiments, the plurality of sound receiving units are configured to receive a sound signal emitted by a sound source in the detection area and transmit the sound signal to the edge computing unit deployed together with the sound receiving array, and the edge computing unit is configured to determine the metadata of the plurality of sound receiving units based on the sound signals received by the sound receiving array. Edge computing is a computing model in which computing and storage resources are placed on edge devices closer to the data source; determining the metadata of the plurality of sound receiving units in an edge computing unit co-deployed with the sound receiving array, and transmitting only that metadata to the space computing unit, reduces the data transmission volume and transmission pressure. The metadata of a sound receiving unit includes the position information of the sound receiving unit and the time of receiving the sound signal, and may further include the loudness of the sound signal received by the sound receiving unit.
And the parameter calculation unit is used for determining a spatial parameter model of the detection area and transmitting the spatial parameter model of the detection area to the spatial calculation unit. And the space calculation unit is used for receiving the metadata of the plurality of sound receiving units and the space parameter model of the detection area, and determining the position information of the sound source based on the metadata of the plurality of sound receiving units and the space parameter model.
In some embodiments, the edge computing unit, the parameter computing unit, and the space computing unit may be deployed on the same device; for example, the edge computing unit, the parameter computing unit, and the space computing unit are deployed together as an edge computing device. Correspondingly, the implementation environment of the detection method based on acoustic wave positioning includes the edge computing device and a cloud server, forming an end-edge-cloud integrated framework: the edge computing device determines the position information of the sound source and transmits the position information of the sound source to the cloud server.
In the embodiment of the application, after the edge computing device transmits the position information of the sound source to the cloud server, the cloud server stores the position information of the sound source; or the cloud server forwards the position information of the sound source to a terminal used by a corresponding user in the detection area; for example, the position information of the sound source is forwarded by the cloud server to a terminal used by the owner of the detection area.
FIG. 2 is a flowchart of a detection method based on acoustic positioning according to an embodiment of the present application, where an execution subject of the method is an edge computing device; referring to fig. 2, the method includes:
Step 201: when a sound source in the detection area emits sound, sound signals are received through a plurality of sound receiving units included in a sound receiving array in the detection area.
The sound source may differ depending on the sound receiving array. For example, if the sound receiving array is a microphone array, the sound source may be a person, an animal, or an object capable of emitting sound signals, and correspondingly the sound signal may be an acoustic wave signal. As another example, if the sound receiving array is an ultrasonic sound receiving array, the sound source may be one that emits ultrasonic signals at frequencies outside the audible range, and the sound signal may be an ultrasonic signal. As another example, if the sound receiving array is a hydrophone array, the detection area may be an underwater area, the sound source may be a sound emitted in the underwater area, and correspondingly the sound signal may be an underwater acoustic signal.
In some embodiments, the plurality of sound receiving units included in the sound receiving array are all in a working state all the time; therefore, when the sound source in the detection area emits sound, the plurality of sound receiving units included in the sound receiving array can receive the sound signals. In other embodiments, one of the plurality of sound receiving units included in the sound receiving array is in a working state, and the other sound receiving units are in a dormant state; when the sound receiving units in the working state detect that the sound source in the detection area emits sound, other sound receiving units are awakened, so that sound signals are received through the plurality of sound receiving units included in the sound receiving array, and the other sound receiving units can be in a dormant state when the sound source in the detection area does not emit sound, and therefore power consumption is saved.
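The wake-on-sound behaviour described above can be illustrated with a simple energy gate. Below is a minimal sketch, assuming an RMS-energy threshold; the threshold value, frame length, and function name are illustrative assumptions, not taken from the patent:

```python
import numpy as np

WAKE_THRESHOLD_RMS = 0.01  # hypothetical threshold; would be tuned per deployment

def should_wake(frame: np.ndarray) -> bool:
    # The single always-on unit computes the RMS energy of each audio frame
    # and wakes the dormant units when a sound source appears to be active.
    rms = float(np.sqrt(np.mean(frame.astype(np.float64) ** 2)))
    return rms > WAKE_THRESHOLD_RMS

# Example: a frame of near-silence does not wake the array; a tone does.
silence = 0.001 * np.random.randn(512)
tone = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(512) / 16000)
print(should_wake(silence), should_wake(tone))  # -> False True
```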
In some embodiments, the sound pickup array includes a plurality of sound pickup units, each of which can receive a sound signal; the number and geometry of the sound pickup units may be set and modified according to the size and structure of the detection area, and in the embodiment of the present application, the number and geometry of the sound pickup units are not specifically limited.
Step 202: metadata of the plurality of sound pickup units is determined based on the sound signals received by the plurality of sound pickup units.
For any sound receiving unit, the metadata of the sound receiving unit includes the time at which the sound receiving unit received the sound signal and the position information of the sound receiving unit. Accordingly, this step may be: when the sound receiving unit receives the sound signal, the current time is determined as the time at which the sound receiving unit received the sound signal; the sound receiving unit stores its own position information, and composes its metadata from the time of receiving the sound signal and its position information. In some embodiments, the metadata of the sound receiving unit may further include the audio loudness of the sound signal received by the sound receiving unit. Moreover, metadata comprising these kinds of information is merely an example; the metadata may also include other data, and the specific content of the metadata is not limited in the embodiments of the present application.
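As an illustration only, metadata of this shape could be represented as follows; the field names and types are assumptions, since the patent specifies only the reception time, the unit's position, and optionally the loudness:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PickupMetadata:
    # Hypothetical record for one sound receiving unit; field names are
    # illustrative, not taken from the patent.
    unit_id: int
    arrival_time: float                   # time the unit received the sound signal (s)
    position: Tuple[float, float, float]  # position information stored in the unit (m)
    loudness: Optional[float] = None      # optional audio loudness of the received signal

meta = PickupMetadata(unit_id=0, arrival_time=12.003, position=(0.0, 0.0, 1.5))
```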
Step 203: and determining a spatial parameter model of the detection area, wherein the spatial parameter model is consistent with the spatial layout of the detection area.
The spatial parameter model is used to represent the structure of the detection area. In some embodiments, the edge computing device (parameter computing unit) stores the spatial parameter model of the detection area, and acquires the stored model. In other embodiments, a user may input the spatial parameter model of the detection area to the edge computing device, which receives the input model. In other embodiments, where the detection area is an indoor area, the edge computing device establishes a communication connection with a sweeping robot in the indoor area and receives the spatial parameter model sent by the sweeping robot over that connection. In this way, the spatial parameter model of the detection area can be acquired with the help of the sweeping robot, reducing the cost of acquiring it.
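The patent does not fix an encoding for the spatial parameter model, only that it must be consistent with the spatial layout of the detection area. A minimal sketch under that assumption is a floor plan made of an outer boundary plus interior wall segments:

```python
# Hypothetical encoding of a spatial parameter model: an outer bounding box
# plus interior wall segments, in metres. A real model could be richer,
# e.g. the occupancy map produced by a sweeping robot.
spatial_model = {
    "bounds": ((0.0, 0.0), (8.0, 6.0)),   # (x_min, y_min), (x_max, y_max)
    "walls": [((4.0, 0.0), (4.0, 3.5))],  # wall segments as endpoint pairs
}

def inside_detection_area(model, p):
    """Check whether point p lies within the detection area's outer bounds."""
    (x0, y0), (x1, y1) = model["bounds"]
    return x0 <= p[0] <= x1 and y0 <= p[1] <= y1

print(inside_detection_area(spatial_model, (2.0, 3.0)))  # -> True
```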
Step 204: the position information of the sound source is determined based on the metadata of the plurality of sound receiving units and the spatial parameter model.
This step may determine the position information of the sound source through any suitable algorithm; for example, an acoustic wave phase difference algorithm, a minimum variance distortionless response (MVDR) algorithm, or a controllable (steered) response power phase transform algorithm. The algorithm used to determine the position information of the sound source is not specifically limited in the embodiments of the present application.
In the embodiments of the present application, a sound receiving array is arranged in the detection area, sound signals emitted by the sound source are received through the plurality of sound receiving units in the array, and the sound source is located in combination with the spatial parameter model of the detection area. Because sound signals propagate over a wide area, the sound source can be located no matter which corner of the detection area it is in, which improves the reliability of sound source localization. In addition, the method does not rely on a camera, so it has the advantages of no blind angle and a small required data bandwidth.
FIG. 3 is a flowchart of a detection method based on acoustic positioning according to an embodiment of the present application; referring to fig. 3, the method includes:
step 301: when a sound source in the detection area emits sound, the edge computing device receives sound signals through a plurality of sound receiving units included in a sound receiving array in the detection area.
In some embodiments, this step is the same as step 201, and will not be described here again.
Step 302: the edge computing device determines metadata of the plurality of sound receiving units based on sound signals received by the plurality of sound receiving units, wherein the metadata of the sound receiving units comprise time when the sound receiving units receive the sound signals and position information of the sound receiving units.
In some embodiments, this step is the same as step 202, and will not be described here again.
Step 303: the edge computing device determines a spatial parametric model of the detection region, the spatial parametric model being consistent with a spatial layout of the detection region.
In some embodiments, this step is the same as step 203, and will not be described here again.
Step 304: the edge computing equipment groups the plurality of sound receiving units to obtain a plurality of groups of sound receiving units, and one group of sound receiving units comprises two sound receiving units.
In some embodiments, when the number of sound receiving units included in the sound receiving array is small (less than a preset number), the edge computing device may group all of the sound receiving units in the array to obtain the plurality of groups. When the number of sound receiving units is large (greater than the preset number), the edge computing device may group only sound receiving units in the array that are far apart from each other, so that a smaller number of units is used to locate the sound source, reducing positioning complexity and improving positioning efficiency.
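One plausible way to implement the far-apart grouping is sketched below. The selection strategy (sorting candidate pairs by separation) is an assumption; the patent only states that widely separated units may be grouped:

```python
import math
from itertools import combinations

def group_far_pairs(positions, max_pairs):
    # Enumerate all candidate pairs of sound receiving units and keep the
    # most widely separated ones; each returned index pair is one "group".
    pairs = sorted(
        combinations(range(len(positions)), 2),
        key=lambda ij: math.dist(positions[ij[0]], positions[ij[1]]),
        reverse=True,
    )
    return pairs[:max_pairs]

# Four units at the corners of a 4 m x 3 m room; the two diagonals win.
print(group_far_pairs([(0, 0), (4, 0), (0, 3), (4, 3)], max_pairs=2))
# -> [(0, 3), (1, 2)]
```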
Step 305: for any group of sound pickup units, the edge computing device determines a phase difference of the sound signals received by the group of sound pickup units based on a time difference of the sound signals received by the group of sound pickup units.
For convenience of description, the two sound receiving units included in the group are referred to as a first sound receiving unit and a second sound receiving unit. The edge computing device determines the period of the sound signal, and determines the phase difference of the sound signals received by the first and second sound receiving units based on the time difference between their receptions and the period of the sound signal. For example, the edge computing device determines the product of 2π and the time difference to obtain a first value, and determines the ratio of the first value to the period of the sound signal to obtain the phase difference of the sound signals received by the first and second sound receiving units.
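The arithmetic of this step is small enough to show directly. A minimal sketch, following the formula in the text (Δφ = 2π·Δt / T):

```python
import math

def phase_difference(time_diff: float, period: float) -> float:
    # First value = 2*pi * time difference; phase difference = first value / period.
    return (2 * math.pi * time_diff) / period

# Example: a 1 kHz tone (period 1 ms) arriving 0.25 ms apart gives pi/2 rad.
print(phase_difference(0.25e-3, 1.0e-3))  # -> 1.5707963...
```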
In some embodiments, for any sound receiving unit, the metadata of the sound receiving unit may further include the audio intensity of the sound signal received by the sound receiving unit. In this step, the edge computing device may additionally determine a phase difference of the sound signals received by the group of sound receiving units based on the audio intensity difference of those signals, and take the average of the phase difference determined from the time difference and the phase difference determined from the audio intensity difference, thereby improving the accuracy of the determined phase difference.
Step 306: the edge computing device determines the position information of the sound source based on the phase differences of the sound signals received by the plurality of groups of sound receiving units, the position information of the plurality of sound receiving units and the space parameter model.
In some embodiments, this step may be achieved by the following steps (1) to (3), comprising:
(1) For any group of sound receiving units, the edge computing equipment determines a first position curve based on the phase difference of the sound signals received by the group of sound receiving units and the position information of two sound receiving units included by the group of sound receiving units, and the phase difference of the sound signals sent by the position points on the first position curve to the group of sound receiving units is the phase difference of the sound signals received by the group of sound receiving units.
Based on a phase-difference algorithm, the edge computing device locates a plurality of location points, and these location points form the first position curve.
(2) The edge computing device corrects the first position curve based on the space parameter model to obtain a second position curve.
In some embodiments, the edge computing device determines, based on the spatial parameter model, whether the location points on the first position curve lie in the same space of the detection area. If they do not, the curve is corrected. For example, suppose the first position curve includes a first, second, third, and fourth location point, where the first three lie in the same space and the fourth lies in another space; the edge computing device then corrects the fourth location point based on the position information of the two sound receiving units in the group, and forms the second position curve from the first, second, and third location points together with the corrected fourth point. If all location points on the first position curve lie in the same space of the detection area, they are accurate location points and no correction is needed; that is, the first position curve can be used directly as the second position curve in the subsequent positioning steps. (A sketch of steps (1) and (2) follows step (3) below.)
(3) The edge computing equipment determines the intersection points of a plurality of second position curves corresponding to the plurality of groups of sound receiving units to obtain the position information of the sound source.
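A sketch of steps (1) and (2) is shown below, assuming a two-dimensional layout and a grid search; the search method and tolerance are illustrative, since the patent does not specify how the curve is computed. The phase difference is converted into a path-length difference, the matching locus (a hyperbola branch) is sampled, and points outside the detection area are discarded using the spatial parameter model; intersecting the resulting curves of several groups, as in step (3), then yields the source position.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air

def position_curve(m1, m2, phase_diff, freq, bounds, tol=0.02):
    # Step (1): points whose path-length difference to the two units of the
    # group matches the measured phase difference form the first position curve.
    wavelength = SPEED_OF_SOUND / freq
    delta = phase_diff / (2 * np.pi) * wavelength
    (x0, y0), (x1, y1) = bounds
    xs, ys = np.meshgrid(np.linspace(x0, x1, 400), np.linspace(y0, y1, 300))
    d1 = np.hypot(xs - m1[0], ys - m1[1])
    d2 = np.hypot(xs - m2[0], ys - m2[1])
    # Step (2): keeping only points inside the detection area is a crude
    # stand-in for correcting the curve with the spatial parameter model.
    mask = np.abs((d1 - d2) - delta) < tol
    return np.stack([xs[mask], ys[mask]], axis=1)

curve = position_curve((1.0, 1.0), (3.0, 1.0), np.pi / 2, freq=1000.0,
                       bounds=((0.0, 0.0), (8.0, 6.0)))
print(curve.shape)  # sampled points on the second position curve
```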
In the embodiment of the present application, the position of the sound source is determined by measuring the phase differences of the sound signals at sound receiving units in different positions, i.e., the sound source is located by a phase difference method. This method has high positioning accuracy, so the accuracy of sound source localization can be improved.
FIG. 4 is a flowchart of a detection method based on acoustic positioning according to an embodiment of the present application; referring to fig. 4, the method includes:
step 401: when a sound source in the detection area emits sound, the edge computing device receives sound signals through a plurality of sound receiving units included in a sound receiving array in the detection area.
In some embodiments, this step is the same as step 201, and will not be described here again.
Step 402: metadata of the sound receiving units are determined based on the sound signals received by the sound receiving units, and the metadata of the sound receiving units comprise time when the sound receiving units receive the sound signals and position information of the sound receiving units.
In some embodiments, this step is the same as step 202, and will not be described here again.
Step 403: the edge computing device determines a spatial parametric model of the detection region, the spatial parametric model being consistent with a spatial layout of the detection region.
In some embodiments, this step is the same as step 203, and will not be described here again.
Step 404: the edge computing device establishes a sound pickup array model based on the location information of the plurality of sound pickup units.
The geometric structure of the sound receiving array model is identical to the distribution of the plurality of sound receiving units; that is, the edge computing device establishes the sound receiving array model based on the number, position information, and arrangement of the sound receiving units. The model is used for the subsequent beamforming and variance calculation.
Step 405: the edge computing device determines phase differences corresponding to the plurality of sound receiving units based on time differences of the sound signals received by the plurality of sound receiving units.
In some embodiments, this step is the same as step 305, and will not be described here again.
Step 406: the edge computing device forms a first beam by adjusting weights of the sound receiving units in the sound receiving array model based on phase differences of the plurality of sound receiving units, and the variance of the first beam is minimum.
The edge computing device forms fourth beams pointing in different directions based on the phase differences of the plurality of sound receiving units, determines the weights of the sound receiving units in the sound receiving array model for which the variance is smallest, and adjusts the weights of the sound receiving units in the model based on those determined weights; the resulting beam is the first beam. In some embodiments, the variance minimization may be solved with an optimization algorithm such as gradient descent or least squares; that is, the weights of the sound receiving units for which the variance is smallest are determined by such an optimization method.
In the embodiment of the present application, forming the first beam with the smallest variance suppresses noise and interference and improves the directivity and resolution of the first beam. In addition, while adjusting the weights of the sound receiving units, the undistorted response to the sound signal of the sound source must be preserved; this can be achieved through constraint conditions or regularization terms, ensuring that the sound signal of the source is not lost during beamforming.
Step 407: the edge computing device determines an azimuth of the sound source relative to the plurality of sound pickup units based on the first beam.
The edge computing device determines an output power of the first beam and determines an azimuth of the sound source relative to the plurality of sound pickup units based on the output power of the first beam. In some embodiments, the edge computing device may determine the azimuth of the sound source relative to the plurality of sound receiving units through other indexes of the first beam, and in embodiments of the present application, the indexes referred to in determining the azimuth are not specifically limited.
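Steps 404 to 407 match the standard minimum variance distortionless response (MVDR) beamformer; the sketch below is offered under that assumption (narrowband signals, far-field source, two-dimensional array), not as the patent's exact algorithm. For each candidate azimuth, the MVDR output power 1 / (aᴴR⁻¹a) is evaluated, and the angle with the highest output power is taken as the source azimuth:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air

def mvdr_azimuth(snapshots, positions, freq, angles_deg):
    # snapshots: (n_units, n_frames) complex narrowband samples from the array.
    # positions: (n_units, 2) coordinates of the sound receiving units in metres.
    n_units, n_frames = snapshots.shape
    R = snapshots @ snapshots.conj().T / n_frames                 # sample covariance
    R += 1e-6 * np.trace(R).real / n_units * np.eye(n_units)     # diagonal loading
    R_inv = np.linalg.inv(R)
    powers = []
    for theta in np.deg2rad(angles_deg):
        direction = np.array([np.cos(theta), np.sin(theta)])
        delays = positions @ direction / SPEED_OF_SOUND
        a = np.exp(-2j * np.pi * freq * delays)                   # steering vector
        # MVDR weights w = R^-1 a / (a^H R^-1 a) minimise the output variance
        # subject to a distortionless response toward theta; the corresponding
        # output power is 1 / (a^H R^-1 a).
        powers.append(1.0 / np.real(a.conj() @ R_inv @ a))
    return float(angles_deg[int(np.argmax(powers))])

# Simulated check: a 1 kHz source at 60 degrees seen by a 4-element line array.
rng = np.random.default_rng(0)
pos = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0], [0.3, 0.0]])
th = np.deg2rad(60.0)
a0 = np.exp(-2j * np.pi * 1000.0 * (pos @ np.array([np.cos(th), np.sin(th)])) / SPEED_OF_SOUND)
s = rng.standard_normal(200) + 1j * rng.standard_normal(200)
x = np.outer(a0, s) + 0.1 * (rng.standard_normal((4, 200)) + 1j * rng.standard_normal((4, 200)))
print(mvdr_azimuth(x, pos, 1000.0, np.arange(0.0, 181.0)))  # close to 60
```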
Step 408: the edge computing device determines location information of the sound source based on azimuth angles of the sound source relative to the plurality of sound pickup units, location information of the first sound pickup unit, and the spatial parameter model.
The edge computing device determines first position information based on the azimuth angles of the sound source relative to the plurality of sound receiving units and the position information of the plurality of sound receiving units, and corrects the first position information based on the spatial parameter model to obtain the position information of the sound source. For example, the edge computing device determines, based on the spatial parameter model, the target position indicated by the first position information in the detection area. If no wall lies between the target position and the plurality of sound receiving units, the first position information is taken as the position information of the sound source. If a wall does lie between them, then for any sound receiving unit, the edge computing device corrects the sound signal received by that unit to account for the wall between the target position and the unit, and re-executes the steps of this embodiment based on the corrected sound signals to obtain the position information of the sound source. For example, the edge computing device increases the intensity of the sound signal received by the sound receiving unit, or advances the time at which the unit received the sound signal, based on the wall between the target position and the unit.
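The first position information in step 408 can be illustrated by intersecting two bearing lines, assuming azimuths are available at two known reference points (e.g., two sub-arrays); this construction is an illustrative assumption, not the patent's prescribed computation:

```python
import numpy as np

def intersect_bearings(p1, theta1, p2, theta2):
    # Solve p1 + t1*u1 = p2 + t2*u2 for the crossing point of two bearing rays.
    u1 = np.array([np.cos(theta1), np.sin(theta1)])
    u2 = np.array([np.cos(theta2), np.sin(theta2)])
    A = np.column_stack([u1, -u2])
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * u1

# Azimuths of 45 and 135 degrees seen from (0, 0) and (4, 0) meet at (2, 2);
# this estimate would then be corrected against the spatial parameter model.
print(intersect_bearings((0, 0), np.pi / 4, (4, 0), 3 * np.pi / 4))
```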
In some embodiments, the method can not only suppress noise and interference and improve the directivity and resolution of the first beam, but also ensure that the sound signal of the sound source is not lost in the beam forming process, so that the accuracy of sound source positioning can be improved based on the method.
FIG. 5 is a flowchart of a detection method based on acoustic positioning according to an embodiment of the present application; referring to fig. 5, the method includes:
step 501: when a sound source in the detection area emits sound, the edge computing device receives sound signals through a plurality of sound receiving units included in a sound receiving array in the detection area.
In some embodiments, this step is the same as step 201, and will not be described here again.
Step 502: metadata of the sound receiving units are determined based on the sound signals received by the sound receiving units, and the metadata of the sound receiving units comprise time when the sound receiving units receive the sound signals and position information of the sound receiving units.
In some embodiments, this step is the same as step 202, and will not be described here again.
Step 503: the edge computing device determines a spatial parametric model of the detection region, the spatial parametric model being consistent with a spatial layout of the detection region.
In some embodiments, this step is the same as step 203, and will not be described here again.
Step 504: the edge computing device forms a second beam based on a time difference of sound signals received by the plurality of sound pickup units and a geometry of the sound pickup array.
After a sound receiving unit receives the sound signal, it converts the sound signal into an electrical signal, which is then amplified and filtered to improve its quality and detectability.
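A minimal sketch of the amplify-and-filter stage, assuming a Butterworth band-pass implemented with SciPy; the band edges, filter order, and gain are illustrative values, not taken from the patent:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def condition_signal(signal, fs, low_hz=100.0, high_hz=4000.0, gain=10.0):
    # Band-pass filter then amplify, to improve the quality and detectability
    # of the electrical signal produced by a sound receiving unit.
    nyq = fs / 2.0
    b, a = butter(4, [low_hz / nyq, high_hz / nyq], btype="band")
    return gain * filtfilt(b, a, signal)

fs = 16000
t = np.arange(fs) / fs
noisy = np.sin(2 * np.pi * 1000 * t) + 0.3 * np.random.randn(fs)
clean = condition_signal(noisy, fs)
```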
Step 505: the edge computing device changes the sound signal of the target sound receiving unit in the second wave beam by controlling the response power of the target sound receiving unit in the sound receiving array to obtain a third wave beam.
On the basis of the second beam, the edge computing device can selectively enhance the sound signal of the target sound receiving unit by controlling the response power of the sound receiving array; the resulting beam is the third beam. The target sound receiving unit may be any sound receiving unit in the sound receiving array, and this step can be realized by adjusting the weight or phase of the units in the array. Moreover, the control of the response power may be performed according to different algorithms and optimization criteria, for example a minimum variance distortionless response algorithm.
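The controllable response power phase transformation described here reads like the steered response power with phase transform (SRP-PHAT) family; that identification is an assumption. Its per-pair building block, generalized cross-correlation with phase transform (GCC-PHAT), is sketched below; summing such correlations over all unit pairs and candidate steering directions yields the response-power map from which the beam is formed:

```python
import numpy as np

def gcc_phat_tdoa(x, y, fs):
    # Cross-power spectrum with PHAT weighting: the magnitude is discarded and
    # only the phase kept, which sharpens the correlation peak in reverberation.
    n = len(x) + len(y)
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    cross = X * np.conj(Y)
    cross /= np.maximum(np.abs(cross), 1e-12)
    cc = np.fft.irfft(cross, n=n)
    shift = int(np.argmax(np.abs(cc)))
    if shift > n // 2:
        shift -= n
    return shift / fs  # estimated time difference of arrival (s)

fs = 16000
rng = np.random.default_rng(1)
sig = rng.standard_normal(4096)
delayed = np.concatenate([np.zeros(8), sig])[:4096]  # y lags x by 8 samples
print(gcc_phat_tdoa(sig, delayed, fs))  # -> -0.0005, i.e. -8 / fs under this convention
```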
Step 506: the edge computing device determines an azimuth of the sound source relative to the plurality of sound pickup units based on the third beam.
The edge computing device determines an output power of the third beam and determines an azimuth of the sound source relative to the plurality of sound pickup units based on the output power of the third beam. In some embodiments, the edge computing device may determine the azimuth of the sound source relative to the plurality of sound receiving units through other indexes of the third beam, in addition to determining the azimuth of the sound source relative to the plurality of sound receiving units through the index of the output power, and in embodiments of the present application, the indexes referred to in determining the azimuth are not specifically limited.
Step 507: the edge computing device determines location information of the sound source based on azimuth angles of the sound source relative to the plurality of sound pickup units, location information of the first sound pickup unit, and the spatial parameter model.
The edge computing device determines first position information based on azimuth angles of the sound source relative to the plurality of sound receiving units and position information of the plurality of sound receiving units; and correcting the first position information based on the space parameter model to obtain the position information of the sound source.
In some embodiments, the sound source is positioned by a controllable response power phase transformation algorithm, and the positioning method has high accuracy; therefore, the accuracy of sound source positioning can be improved based on the method.
FIG. 6 is a block diagram of a detection device based on acoustic positioning according to an embodiment of the present application; referring to fig. 6, the apparatus includes:
a receiving module 601, configured to receive, when a sound source in a detection area emits sound, sound signals through a plurality of sound receiving units included in a sound receiving array in the detection area;
a first determining module 602, configured to determine metadata of the plurality of sound receiving units based on sound signals received by the plurality of sound receiving units;
a second determining module 603, configured to determine a spatial parameter model of the detection area, where the spatial parameter model is consistent with a spatial layout of the detection area;
A third determining module 604 is configured to determine location information of the sound source based on metadata of the plurality of sound receiving units and the spatial parameter model.
In some embodiments, the metadata of the sound pickup unit includes a time when the sound pickup unit received the sound signal and location information of the sound pickup unit;
a third determining module 604, configured to group the plurality of sound receiving units to obtain a plurality of groups of sound receiving units, where a group of sound receiving units includes two sound receiving units; for any group of sound receiving units, determining the phase difference of the sound signals received by the group of sound receiving units based on the time difference of the sound signals received by the group of sound receiving units; and determining the position information of the sound source based on the phase difference of the sound signals received by the plurality of groups of sound receiving units, the position information of the plurality of sound receiving units and the space parameter model.
In some embodiments, the third determining module 604 is configured to determine, for any group of sound pickup units, a first location curve based on a phase difference of the sound signals received by the group of sound pickup units and location information of two sound pickup units included in the group of sound pickup units, where a phase difference of the sound signals sent by location points on the first location curve reaching the group of sound pickup units is a phase difference of the sound signals received by the group of sound pickup units; correcting the first position curve based on the space parameter model to obtain a second position curve; and determining the intersection points of a plurality of second position curves corresponding to the plurality of groups of sound receiving units to obtain the position information of the sound source.
In some embodiments, the metadata of the sound pickup unit includes a time when the sound pickup unit received the sound signal and location information of the sound pickup unit;
a third determining module 604, configured to establish a sound receiving array model based on the location information of the plurality of sound receiving units; determine phase differences corresponding to the plurality of sound receiving units based on time differences of the sound signals received by the plurality of sound receiving units; form a first beam, whose variance is minimized, by adjusting the weights of the sound receiving units in the sound receiving array model based on the phase differences of the plurality of sound receiving units; determine an azimuth of the sound source relative to the plurality of sound receiving units based on the first beam; and determine the position information of the sound source based on the azimuth angles of the sound source relative to the plurality of sound receiving units, the position information of the plurality of sound receiving units, and the spatial parameter model.
In some embodiments, the metadata of the sound pickup unit includes a time when the sound pickup unit received the sound signal and location information of the sound pickup unit;
a third determining module 604, configured to form a second beam based on the time differences of the sound signals received by the plurality of sound receiving units and the geometric structure of the sound receiving array; change the sound signal of a target sound receiving unit in the second beam by controlling the response power of the target sound receiving unit in the sound receiving array, to obtain a third beam; determine an azimuth of the sound source relative to the plurality of sound receiving units based on the third beam; and determine the position information of the sound source based on the azimuth angles of the sound source relative to the plurality of sound receiving units, the position information of the plurality of sound receiving units, and the spatial parameter model.
In some embodiments, the third determining module 604 is configured to determine the first location information based on an azimuth of the sound source relative to the plurality of sound pickup units and location information of the plurality of sound pickup units; and correcting the first position information based on the space parameter model to obtain the position information of the sound source.
In some embodiments, the acoustic pickup array comprises a microphone array, an ultrasonic acoustic pickup array, or a hydrophone array.
In some embodiments, the first determining module 602 is configured to determine metadata of the plurality of sound receiving units based on sound signals received by the plurality of sound receiving units through an edge computing unit disposed together with the sound receiving array, and transmit the metadata of the plurality of sound receiving units to the space computing unit;
and the space calculation unit is used for determining the position information of the sound source based on the metadata of the plurality of sound receiving units and the space parameter model.
In some embodiments, the second determining module 603 is configured to receive an input spatial parameter model of the detection area; or,
when the detection area is an indoor area, the second determining module 603 is configured to establish a communication connection with a sweeping robot in the indoor area and receive the spatial parameter model sent by the sweeping robot over that connection.
In the embodiments of the present application, the sound source is located by arranging a sound receiving array in the detection area, receiving the sound signals emitted by the sound source through the plurality of sound receiving units in the array, and combining them with a spatial parameter model of the detection area. Because sound propagates throughout the area, the position of the sound source can be determined no matter which corner of the detection area it occupies, which improves the reliability of sound source localization.
It should be noted that the division into the above functional modules in the acoustic-positioning-based detection device of the foregoing embodiment is only illustrative; in practical applications, these functions may be allocated to different functional modules as needed, that is, the internal structure of the edge computing device may be divided into different functional modules to complete all or part of the functions described above. In addition, the detection device provided in the above embodiment and the method embodiments belong to the same concept; its detailed implementation is described in the method embodiments and is not repeated here.
Fig. 7 shows a block diagram of an edge computing device 700 according to an exemplary embodiment of the present application. The edge computing device 700 may be a computer or a tablet computer. In general, the edge computing device 700 includes a processor 701 and a memory 702.
The processor 701 may include one or more processing cores, for example a 4-core or 8-core processor. The processor 701 may be implemented in at least one of the following hardware forms: DSP (digital signal processing), FPGA (field-programmable gate array), and PLA (programmable logic array). The processor 701 may also include a main processor and a coprocessor; the main processor, also referred to as a CPU (central processing unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 701 may integrate a GPU (graphics processing unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 701 may also include an AI (artificial intelligence) processor for handling machine-learning-related computing operations.
The memory 702 may include one or more computer-readable storage media, which may be non-transitory. The memory 702 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in the memory 702 stores at least one piece of program code, which is executed by the processor 701 to implement the detection method based on acoustic positioning provided by the method embodiments of the present application.
In some embodiments, the edge computing device 700 may optionally further include a peripheral interface 703 and at least one peripheral device. The processor 701, the memory 702, and the peripheral interface 703 may be connected by buses or signal lines, and each peripheral device may be connected to the peripheral interface 703 via a bus, a signal line, or a circuit board. Specifically, the peripheral devices include at least one of a radio frequency circuit 704, a display screen 705, a camera assembly 706, an audio circuit 707, and a power supply 708.
The peripheral interface 703 may be used to connect at least one I/O (input/output)-related peripheral device to the processor 701 and the memory 702. In some embodiments, the processor 701, the memory 702, and the peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of them may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The radio frequency circuit 704 is configured to receive and transmit RF (radio frequency) signals, also referred to as electromagnetic signals. The radio frequency circuit 704 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission and converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 704 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so on. The radio frequency circuit 704 may communicate with other terminals via at least one wireless communication protocol, including but not limited to metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (wireless fidelity) networks. In some embodiments, the radio frequency circuit 704 may also include NFC (near field communication) related circuitry, which is not limited in this application.
The display screen 705 is used to display a UI (user interface), which may include graphics, text, icons, video, and any combination thereof. When the display screen 705 is a touch display, it can also collect touch signals on or above its surface; such a touch signal may be input to the processor 701 as a control signal for processing, and the display screen 705 may then also provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 705, provided on the front panel of the edge computing device 700; in other embodiments, there may be at least two display screens 705, disposed on different surfaces of the device or in a folded design; in still other embodiments, the display screen 705 may be a flexible display disposed on a curved or folded surface of the device. The display screen 705 may even be arranged in a non-rectangular irregular pattern, i.e., a shaped screen, and may be made of materials such as an LCD (liquid crystal display) or an OLED (organic light-emitting diode).
The camera assembly 706 is used to capture images or video. Optionally, the camera assembly 706 includes a front camera and a rear camera; typically, the front camera is disposed on the front panel of the device and the rear camera on its rear surface. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth camera, a wide-angle camera, and a telephoto camera, so that the main camera can be fused with the depth camera for a background blurring function, or with the wide-angle camera for panoramic shooting, VR (virtual reality) shooting, or other fused shooting functions. In some embodiments, the camera assembly 706 may also include a flash, which may be a single-color-temperature flash or a dual-color-temperature flash; the latter is a combination of a warm-light flash and a cold-light flash and can be used for light compensation at different color temperatures.
The audio circuit 707 may include a microphone and a speaker. The microphone collects sound waves from the user and the environment, converts them into electrical signals, and inputs them to the processor 701 for processing or to the radio frequency circuit 704 for voice communication. For stereo acquisition or noise reduction, multiple microphones may be disposed at different parts of the edge computing device 700; the microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker converts electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves; it may be a conventional thin-film speaker or a piezoelectric ceramic speaker. A piezoelectric ceramic speaker can convert electrical signals not only into sound waves audible to humans but also into sound waves inaudible to humans, for ranging and other purposes. In some embodiments, the audio circuit 707 may also include a headphone jack.
The power supply 708 is used to power the various components of the edge computing device 700. The power supply 708 may be alternating current, direct current, a disposable battery, or a rechargeable battery. When the power supply 708 includes a rechargeable battery, the battery may support wired or wireless charging and may also support fast-charging technology.
In some embodiments, the edge computing device 700 further includes one or more sensors 709, including but not limited to an acceleration sensor 710, a gyro sensor 711, a pressure sensor 712, an optical sensor 713, and a proximity sensor 714.
The acceleration sensor 710 can detect the magnitude of acceleration on the three axes of a coordinate system established with the edge computing device 700; for example, it can detect the components of gravitational acceleration along the three axes. The processor 701 may control the display screen 705 to display the user interface in landscape or portrait view according to the gravitational acceleration signal collected by the acceleration sensor 710. The acceleration sensor 710 may also be used to collect motion data of a game or of the user.
The gyro sensor 711 can detect the body orientation and rotation angle of the edge computing device 700 and, in cooperation with the acceleration sensor 710, collect the user's 3D motion on the device. Based on the data collected by the gyro sensor 711, the processor 701 can implement functions such as motion sensing (for example, changing the UI in response to the user's tilting operation), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 712 may be disposed on a side frame of the edge computing device 700 and/or in a lower layer of the display screen 705. When disposed on a side frame, it can detect the user's grip on the device, and the processor 701 can perform left-right hand recognition or shortcut operations according to the collected grip signal. When disposed in the lower layer of the display screen 705, the processor 701 controls the operability controls on the UI according to the user's pressure operation on the display screen; the operability controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The optical sensor 713 is used to collect the ambient light intensity. In one embodiment, the processor 701 controls the display brightness of the display screen 705 based on the ambient light intensity collected by the optical sensor 713: when the ambient light is strong, the display brightness is turned up; when it is weak, the brightness is turned down. In another embodiment, the processor 701 may also dynamically adjust the shooting parameters of the camera assembly 706 based on the collected ambient light intensity.
The proximity sensor 714, also known as a distance sensor, is typically provided on the front panel of the edge computing device 700 and collects the distance between the user and the front of the device. In one embodiment, when the proximity sensor 714 detects that this distance is gradually decreasing, the processor 701 controls the display screen 705 to switch from the screen-on state to the screen-off state; when it detects that the distance is gradually increasing, the processor 701 controls the display screen 705 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the structure shown in Fig. 7 does not limit the edge computing device 700, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
An embodiment of the present application also provides a computer-readable storage medium storing at least one piece of program code, which is loaded and executed by a processor to implement the detection method based on acoustic positioning of any of the above embodiments. Alternatively, the storage medium may be a non-transitory computer-readable storage medium, for example a ROM (read-only memory), a RAM (random access memory), a CD-ROM (compact disc read-only memory), a magnetic tape, a floppy disk, or an optical data storage device.
An embodiment of the present application also provides a computer program product comprising computer program code stored in a computer-readable storage medium. A processor of an edge computing device reads the computer program code from the computer-readable storage medium and executes it, causing the edge computing device to perform the detection method based on acoustic positioning of any of the above embodiments.
In some embodiments, the computer program product of the embodiments of the present application may be deployed and executed on one edge computing device, on multiple edge computing devices located at one site, or on multiple edge computing devices distributed across multiple sites and interconnected by a communication network; edge computing devices distributed across multiple sites and interconnected by a communication network may constitute a blockchain system.
The foregoing is merely illustrative of the present application and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its scope of protection.

Claims (13)

1. A method of detection based on acoustic positioning, the method comprising:
When a sound source in a detection area emits sound, receiving sound signals through a plurality of sound receiving units included in a sound receiving array in the detection area;
determining metadata of the plurality of sound receiving units based on sound signals received by the plurality of sound receiving units;
determining a spatial parameter model of the detection area, wherein the spatial parameter model is consistent with the spatial layout of the detection area;
determining position information of the sound source based on the metadata of the plurality of sound receiving units and the spatial parameter model.
2. The method of claim 1, wherein the metadata of a sound receiving unit includes the time at which the sound receiving unit received the sound signal and the position information of the sound receiving unit;
the determining of the position information of the sound source based on the metadata of the plurality of sound receiving units and the spatial parameter model comprises:
grouping the plurality of sound receiving units to obtain a plurality of groups of sound receiving units, wherein each group comprises two sound receiving units;
for any group of sound receiving units, determining the phase difference of the sound signals received by the group based on the time difference of the sound signals received by the group;
and determining the position information of the sound source based on the phase differences of the sound signals received by the plurality of groups of sound receiving units, the position information of the plurality of sound receiving units, and the spatial parameter model.
3. The method of claim 2, wherein the determining of the position information of the sound source based on the phase differences of the sound signals received by the plurality of groups of sound receiving units, the position information of the plurality of sound receiving units, and the spatial parameter model comprises:
for any group of sound receiving units, determining a first position curve based on the phase difference of the sound signals received by the group and the position information of the two sound receiving units included in the group, wherein a sound signal emitted from any position point on the first position curve reaches the two sound receiving units of the group with that phase difference;
correcting the first position curve based on the spatial parameter model to obtain a second position curve;
and determining the intersection points of the plurality of second position curves corresponding to the plurality of groups of sound receiving units to obtain the position information of the sound source.
4. The method of claim 1, wherein the metadata of a sound receiving unit includes the time at which the sound receiving unit received the sound signal and the position information of the sound receiving unit;
the determining of the position information of the sound source based on the metadata of the plurality of sound receiving units and the spatial parameter model comprises:
establishing a sound receiving array model based on the position information of the plurality of sound receiving units;
determining the phase differences corresponding to the plurality of sound receiving units based on the time differences of the sound signals received by the plurality of sound receiving units;
forming a first beam by adjusting the weights of the sound receiving units in the sound receiving array model based on the phase differences of the plurality of sound receiving units, such that the variance of the first beam is minimized;
determining the azimuth of the sound source relative to the plurality of sound receiving units based on the first beam;
and determining the position information of the sound source based on the azimuth of the sound source relative to the plurality of sound receiving units, the position information of the first sound receiving unit, and the spatial parameter model.
5. The method of claim 1, wherein the metadata of a sound receiving unit includes the time at which the sound receiving unit received the sound signal and the position information of the sound receiving unit;
the determining of the position information of the sound source based on the metadata of the plurality of sound receiving units and the spatial parameter model comprises:
forming a second beam based on the time differences of the sound signals received by the plurality of sound receiving units and the geometry of the sound receiving array;
obtaining a third beam by controlling the response power of a target sound receiving unit in the sound receiving array so as to change that unit's sound signal in the second beam;
determining the azimuth of the sound source relative to the plurality of sound receiving units based on the third beam;
and determining the position information of the sound source based on the azimuth of the sound source relative to the plurality of sound receiving units, the position information of the first sound receiving unit, and the spatial parameter model.
6. The method of claim 4 or 5, wherein the determining of the position information of the sound source based on the azimuth of the sound source relative to the plurality of sound receiving units, the position information of the first sound receiving unit, and the spatial parameter model comprises:
determining first position information based on the azimuth of the sound source relative to the plurality of sound receiving units and the position information of the plurality of sound receiving units;
and correcting the first position information based on the spatial parameter model to obtain the position information of the sound source.
7. The method of claim 1, wherein the sound receiving array comprises a microphone array, an ultrasonic sound receiving array, or a hydrophone array.
8. The method of claim 1, wherein the determining metadata of the plurality of sound receiving units based on the sound signals received by the plurality of sound receiving units comprises:
determining the metadata of the plurality of sound receiving units based on the sound signals received by the plurality of sound receiving units through an edge computing unit disposed together with the sound receiving array, and transmitting the metadata of the plurality of sound receiving units to a space computing unit;
the space computing unit is configured to determine the position information of the sound source based on the metadata of the plurality of sound receiving units and the spatial parameter model.
9. The method of claim 1, wherein the determining a spatial parameter model of the detection area comprises:
receiving an input spatial parameter model of the detection area; or,
when the detection area is an indoor area, establishing a communication connection with a sweeping robot in the indoor area and receiving the spatial parameter model sent by the sweeping robot based on the communication connection.
10. A detection device based on acoustic positioning, the device comprising:
a receiving module, configured to receive sound signals through a plurality of sound receiving units included in a sound receiving array in a detection area when a sound source in the detection area emits sound;
a first determining module, configured to determine metadata of the plurality of sound receiving units based on the sound signals received by the plurality of sound receiving units;
a second determining module, configured to determine a spatial parameter model of the detection area, the spatial parameter model being consistent with the spatial layout of the detection area;
and a third determining module, configured to determine position information of the sound source based on the metadata of the plurality of sound receiving units and the spatial parameter model.
11. An edge computing device, comprising one or more processors and one or more memories, wherein at least one piece of program code is stored in the one or more memories and is loaded and executed by the one or more processors to implement the detection method based on acoustic positioning of any one of claims 1 to 9.
12. A computer-readable storage medium, wherein at least one piece of program code is stored in the storage medium and is loaded and executed by a processor to implement the detection method based on acoustic positioning of any one of claims 1 to 9.
13. A computer program product comprising computer program code stored in a computer-readable storage medium, wherein a processor of an edge computing device reads the computer program code from the computer-readable storage medium and executes it, causing the edge computing device to perform the detection method based on acoustic positioning of any one of claims 1 to 9.
CN202311787039.8A 2023-12-22 2023-12-22 Detection method, device, equipment, storage medium and product based on acoustic wave positioning Pending CN117784012A (en)

Priority Applications (1)

Application Number: CN202311787039.8A (CN117784012A, en); Priority Date: 2023-12-22; Filing Date: 2023-12-22; Title: Detection method, device, equipment, storage medium and product based on acoustic wave positioning

Publications (1)

Publication Number: CN117784012A; Publication Date: 2024-03-29

Family ID: 90397510

Country Status (1)

Country Link
CN (1) CN117784012A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination