CN112098940B - Indoor stationary sound source positioning method based on group robot - Google Patents

Indoor stationary sound source positioning method based on group robot

Info

Publication number: CN112098940B (application CN202010991601.9A; also published as CN112098940A)
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 孙昊 (Sun Hao), 陆国庆 (Lu Guoqing)
Assignee (original and current): Hebei University of Technology
Legal status: Active (granted)

Classifications

    • G01S 5/18 (G: Physics; G01: Measuring, testing): Position-fixing by co-ordinating two or more direction or position line determinations using ultrasonic, sonic, or infrasonic waves
    • G01S 5/20: Position of source determined by a plurality of spaced direction-finders
    • Y02T 90/00: Enabling technologies with a potential or indirect contribution to GHG emissions mitigation

Abstract

The invention discloses an indoor stationary sound source localization method based on a group of robots. Several robots, each carrying a single microphone, perform indoor sound source localization; the structure is simple and fault tolerance during localization is improved. The sound source position is computed repeatedly during localization, and higher accuracy is obtained by continuously approaching the source. In each computation, the robots are divided into groups for localization; weights are then assigned according to the difference between each group's result and the mean of all groups' results, and the position estimate at that moment is obtained by multiplying each group's result by its weight and accumulating. The optimal source position is obtained by weighting the position estimates from different moments. A reference microphone reduces the computation required of the group robots during localization, and accuracy is improved through cooperation and repeated movement of the robots.

Description

Indoor stationary sound source positioning method based on group robot
Technical Field
The invention belongs to the technical field of sound source position positioning, and particularly relates to an indoor stationary sound source positioning method based on a group robot.
Background
With the development of robotics, application scenarios for robots are ever wider: indoor sweeping robots, service robots, and the like have entered daily life. Localizing an indoor sound source with acoustic sensors gives a robot an auditory system, compensating for the failure of visual sensors in dim or dark conditions and widening the robot's working environment. Robot sound source localization requires acoustic sensors such as microphones, and several microphones are usually needed to localize a source well. Most current research mounts a microphone array integrating several microphones on a single robot to estimate the position of a sound source indoors; such a system is affected by faults of the robot itself and by environmental factors such as noise, so its fault tolerance is poor. A group robot system consists of several structurally simple, identical robots; it has higher fault tolerance and robustness, and more accurate localization can be achieved through cooperation among the robots.
CN109001682A proposes a robot sound source localization method that uses a cross-correlation algorithm to compute the delay difference of the source signal between microphone pairs, reducing computational complexity. CN108254721A proposes a sound source localization method for a two-dimensional microphone array, which determines the azimuth interval of the target source relative to the robot from the sound energy values acquired by a first and a second microphone and the positional relationship between them. In both of these methods, a single robot carries a microphone array composed of several microphones; the relative positions of the microphones are fixed, so the optimal source position cannot be sought by changing their relative positions.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: several robots, each carrying a single microphone, cooperate to localize a sound source target, computing its direction and distance from the differences in the times at which the different robots receive the sound and from the relative positions of the robots. This overcomes the inability of a single robot to localize the source when it fails, and the robots are dispersed over the area during localization.
The technical solution adopted by the invention to solve this problem is an indoor stationary sound source localization method based on group robots, comprising the following steps:
first, determining the coordinate relation of the robot in an initial state:
Define a global coordinate system and place four robots R1, R2, R3, R4, each carrying a single microphone, at known initial positions; denote the coordinates of robot Ri at the initial time t0 by (xi(t0), yi(t0)), i = 1, 2, 3, 4. Each robot is circular, the initial relative positions of the robots are fixed, and each microphone is located at the center of its robot, so the microphone coordinates coincide with the robot coordinates;
secondly, collecting the sound arrival time of the group robot:
Assume that the arrival times of the sound collected at time t0 by the four robots R1, R2, R3, R4, each carrying a single microphone, are T1, T2, T3, T4 respectively. The four robots are then controlled to advance toward the sound source, each at its own random speed. After a fixed time interval Δt, at time t1, the sound arrival times are collected again and the arrival times T1, T2, T3, T4 of the four robots at time t1 are recorded. The arrival times are collected every fixed interval Δt and recorded at each collection moment; after N fixed intervals Δt, a data set of N + 1 sets of arrival times is obtained;
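As a rough illustration of the collection loop in this step, the sketch below simulates four single-microphone robots sampling arrival times at t0 and after each of N fixed intervals. The geometry, per-robot speeds, and interval are invented for the example, and a robot's arrival time is simply its distance to the source divided by the speed of sound.

```python
import numpy as np

C = 343.0  # assumed speed of sound in air (m/s)

def arrival_times(robot_xy, source_xy):
    """Arrival time at each robot's microphone (emission at t = 0).

    The microphone sits at the robot centre, so microphone and robot
    coordinates coincide, and the arrival time is distance / C.
    """
    d = np.linalg.norm(robot_xy - source_xy, axis=1)
    return d / C

# Four robots at known initial positions; stationary source at (5, 5) m.
robots = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
source = np.array([5.0, 5.0])

N = 2                                    # number of fixed intervals after t0
step = np.array([0.2, 0.3, 0.4, 0.5])    # metres moved per interval (invented)
dataset = [arrival_times(robots, source)]
for _ in range(N):
    # each robot advances toward the source at its own speed
    to_src = source - robots
    robots = robots + step[:, None] * to_src / np.linalg.norm(
        to_src, axis=1, keepdims=True)
    dataset.append(arrival_times(robots, source))

dataset = np.array(dataset)  # N + 1 sets of four arrival times
```

Because every robot moves toward the source between samples, each successive set of arrival times is strictly smaller than the previous one.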
thirdly, calculating sound source coordinates:
When computing with the sound arrival time differences, the sound signal received by each robot's microphone is approximated as a plane wave, i.e. the sound field follows a far-field model. A microphone at a fixed position is set up as a reference microphone, whose distance to the true position of the sound source is known.
The sound signal model received by every robot's microphone is taken as ideal, i.e. the ambient noise is approximated by white Gaussian noise. Let the sound signal generated by the source be S(t), where t denotes the emission time. The signal received by the i-th robot, Si(t), is given by formula (1):
Si(t) = α·S(t − Ti) + n(t)    (1)
In formula (1), Si(t) is the sound signal received by the i-th robot as a function of time t, i = 1, 2, 3, 4; t is the time corresponding to the signal received by the microphone; α is the amplitude attenuation coefficient of the sound signal on its way to the microphone; Ti is the time delay from the source to the i-th robot; n(t) is the ambient noise signal; and S(t − Ti) is the source signal delayed by Ti. Si(t) and n(t) are mutually independent. The sound arrival time at the current moment is computed from the waveform of the signal received by each robot's microphone;
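Formula (1) can be exercised numerically. In the sketch below all parameters are invented: a Gaussian pulse stands in for S(t), and the sample rate, delays, α, and noise level are assumptions. Two received signals Si(t) = α·S(t − Ti) + n(t) are built and the delay difference is recovered by cross-correlation, one common way of extracting the arrival-time information used by the method.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16_000                          # assumed sample rate (Hz)
t = np.arange(0, 0.1, 1 / fs)        # 100 ms observation window

def S(t):
    """Source signal: a short Gaussian pulse standing in for the true S(t)."""
    return np.exp(-0.5 * ((t - 0.02) / 0.002) ** 2)

def received(t, T_i, alpha=0.8, noise_std=0.05):
    """Formula (1): S_i(t) = alpha * S(t - T_i) + n(t), n(t) white Gaussian."""
    return alpha * S(t - T_i) + rng.normal(0.0, noise_std, t.shape)

s1 = received(t, T_i=0.010)          # 10 ms delay to robot 1
s2 = received(t, T_i=0.013)          # 13 ms delay to robot 2

# The cross-correlation peak gives the delay difference T1 - T2 in samples.
lag = int(np.argmax(np.correlate(s1, s2, mode="full"))) - (len(t) - 1)
delay_diff = lag / fs                # approximately T1 - T2 = -3 ms
```

A pulse-like signal is used deliberately: cross-correlation of a periodic tone is ambiguous modulo its period, while a pulse gives a single sharp peak.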
From the arrival times T1, T2, T3, T4 of the sound collected at each moment by the four single-microphone robots R1, R2, R3, R4, combined with the robots' coordinates at that moment, the source coordinates at the current moment are computed as follows. The four microphone-carrying robots are divided into two groups, R1 with R2 and R3 with R4, and within each group the larger (later) of the two arrival times is taken; at moment t:
A = max(T1, T2)    (2)
B = max(T3, T4)    (3)
Denote the coordinates of robots R1, R2, R3, R4 at moment t by (x1, y1), (x2, y2), (x3, y3), (x4, y4). The true distance between the reference microphone Ms and the sound source is ds, and the time at which Ms receives the sound is Ts, with Ts = ds / C, where C is the speed of sound;
Thus the sound time-difference distance between R1 and R2 at moment t is d12, with C the speed of sound:
d12 = C·(T1 − T2)    (4)
The straight-line distance between R1 and R2 at moment t is L12:
L12 = sqrt((x1 − x2)² + (y1 − y2)²)    (5)
The angle θ1 between the line R1R2 and the estimated sound source position satisfies, under the far-field model:
θ1 = arccos(d12 / L12)    (6)
From the coordinate values of robots R1 and R2 at moment t and formulas (4), (5), (6), the estimated sound source position (xs1·r12, ys1·r12) of this group is obtained (formulas (7) and (8), rendered as images in the source).
Similarly, the estimated sound source position (xs2·r34, ys2·r34) of robots R3 and R4 at moment t is obtained (formulas (9) and (10), rendered as images in the source).
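The closed forms (7)-(10) appear only as images in the source, but the bearing part, formulas (4)-(6), can be checked numerically under the far-field assumption. In the sketch below the pair positions and the source location are invented; the angle between the pair's baseline and a distant source's direction is recovered from the arrival-time difference alone.

```python
import math

C = 343.0  # speed of sound (m/s)

def bearing_from_pair(p1, p2, T1, T2):
    """Angle theta1 between the directed baseline R1 -> R2 and the source
    direction, from formulas (4)-(6): d12 = C*(T1 - T2), cos(theta1) = d12/L12.
    """
    d12 = C * (T1 - T2)                       # formula (4)
    L12 = math.dist(p1, p2)                   # formula (5)
    ratio = max(-1.0, min(1.0, d12 / L12))    # clamp against noise
    return math.acos(ratio)                   # formula (6)

# Far-away source at 45 degrees from a 0.5 m baseline along the x-axis.
p1, p2 = (0.0, 0.0), (0.5, 0.0)
src = (100.0, 100.0)
T1 = math.dist(p1, src) / C
T2 = math.dist(p2, src) / C
theta1 = bearing_from_pair(p1, p2, T1, T2)    # close to pi/4
```

Note the left/right ambiguity of a single pair: mirror positions across the baseline give the same θ1, which is one reason the method fuses two pairs together with a reference microphone.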
From the two groups' estimates of the sound source position, the mean of the two estimates is computed, each group is assigned a weight according to the difference between its estimate and the mean, each group's estimate is multiplied by its weight, and the products are accumulated to give the source coordinate estimate at moment t.
The coordinate mean of the two groups' estimates is computed as:
x̄s = (xs1·r12 + xs2·r34) / 2
ȳs = (ys1·r12 + ys2·r34) / 2
The difference of each group's estimate from this mean determines its weight (the weight formulas appear as images in the source), and the source coordinate estimate at moment t is obtained as the weighted sum of the two group estimates (these formulas likewise appear as images in the source).
In this way, the source coordinate estimate at each moment is computed in turn;
fourth, calculating the optimal sound source position:
Average the per-moment source coordinate estimates computed in step three, assign each moment's estimate a weight according to its difference from the average, multiply each estimate by its weight, and accumulate to obtain the optimal source coordinate estimate.
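The weight formulas of steps three and four are rendered as images in the source, so the scheme below is an assumption consistent with the surrounding text: each estimate's weight is inversely proportional to its difference from the plain mean, normalised to sum to one. The same helper then serves both step three (fusing the two group estimates at one moment) and step four (fusing the per-moment estimates).

```python
import math

def weighted_fusion(estimates):
    """Difference-weighted fusion of several (x, y) position estimates.

    Assumed scheme (the patent's formulas are images): weight each
    estimate by the inverse of its distance to the plain mean, normalised
    to sum to one, so outlying estimates contribute less.
    """
    n = len(estimates)
    mx = sum(x for x, _ in estimates) / n
    my = sum(y for _, y in estimates) / n
    inv = [1.0 / (math.dist(e, (mx, my)) or 1e-12) for e in estimates]
    s = sum(inv)
    w = [v / s for v in inv]
    return (sum(wi * e[0] for wi, e in zip(w, estimates)),
            sum(wi * e[1] for wi, e in zip(w, estimates)))

# Step three: the two group estimates at one moment.
per_moment = weighted_fusion([(5.1, 4.9), (4.9, 5.1)])
# Step four: per-moment estimates across the N + 1 moments, one outlier.
final = weighted_fusion([(5.0, 5.0), (5.2, 5.0), (9.0, 9.0)])
```

With exactly two estimates the two differences from their midpoint coincide, so the weights reduce to one half each; the weighting only starts to discriminate when more than two estimates are fused, as in step four.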
Compared with the prior art, the invention has the following beneficial effects:
1. Several robots perform indoor sound source localization, each carrying a single microphone; the structure is simple and fault tolerance during localization is improved.
2. The source position is computed multiple times during localization, and higher accuracy is obtained by continuously approaching the source.
3. In each computation the robots can be grouped for localization; weights are assigned according to the difference between each group's result and the mean of all groups' results, and the position estimate at that moment is obtained by multiplying each group's result by its weight and accumulating.
4. The optimal source position is obtained by weighting the position estimates from different moments.
5. The reference microphone reduces the computation required of the group robots during localization, and accuracy is improved through cooperation and repeated movement of the robots.
Drawings
FIG. 1 is a schematic diagram of the robot positions at time t0 in an embodiment of the group-robot-based indoor stationary sound source localization method of the invention. R1, R2, R3, R4 denote the four robots and M1, M2, M3, M4 their four microphones; the black triangle marks the true source position; T1, T2, T3, T4 denote the times at which each microphone receives the sound at time t0; the origin (0, 0) of the global coordinate system is at the upper left.
FIG. 2 is the corresponding schematic diagram of the robot positions at time t1; the notation is the same as in FIG. 1, with T1, T2, T3, T4 denoting the reception times at time t1.
FIG. 3 is a schematic diagram of the calculation principle for the source position estimate at moment t in an embodiment of the method. Black arrows denote the sound direction and black dotted lines the plane waves of the far-field propagation model; R1, R2, R3, R4 denote the four robots; d12 is the sound time-difference distance between R1 and R2 at moment t, and d34 that between R3 and R4; θ1 and θ2 are the angles from R1 and R3 to the estimated source positions S1 and S2. The true distance between the reference microphone Ms and the source is ds, and the time at which it receives the sound is Ts. After each robot records the time at which it receives the sound signal, the four microphone-carrying robots are divided into two groups: R1 with R2, and R3 with R4.
Detailed Description
The invention provides an indoor stationary sound source positioning method (simply referred to as positioning method) based on a group robot, which comprises the following steps:
first, determining the coordinate relation of the robot in an initial state:
As shown in FIG. 1, a global coordinate system is defined and four robots R1, R2, R3, R4, each carrying a single microphone, are placed at known initial positions; the coordinates of robot Ri at the initial time t0 are (xi(t0), yi(t0)), i = 1, 2, 3, 4. Each robot is circular with an assumed radius of 5 cm, and the initial relative positions of the robots are fixed (the specific coordinate constraints appear as images in the source). Each microphone is located at the center of its robot, so the microphone coordinates coincide with the robot coordinates.
Secondly, collecting the sound arrival time of the group robot:
As shown in FIG. 1, assume that the arrival times of the sound collected at time t0 by the four single-microphone robots R1, R2, R3, R4 are T1, T2, T3, T4 respectively. The robots are then controlled to advance toward the sound source at random, mutually different speeds (v1, v2, v3, v4). After a fixed time interval Δt, at time t1, the arrival times are collected again and the arrival times T1, T2, T3, T4 of the four robots at time t1 are recorded. The arrival times are collected every fixed interval Δt and recorded at each collection moment; after N (N ≥ 1) fixed intervals Δt, a data set of N + 1 sets of arrival times is obtained.
Thirdly, calculating the sound source positions of the group robots:
When several robots are used for sound source localization, the source position is computed from the sound arrival time differences once every fixed interval Δt; after each computed source position is determined, the robots move toward it at their different speeds (v1, v2, v3, v4). When computing with the arrival time differences, the sound signal received by a robot's microphone may be approximated as a plane wave, i.e. the sound field follows a far-field model. A microphone at a fixed position is provided as a reference microphone, whose distance to the true position of the sound source is known.
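The movement rule between estimates ("each robot moves toward the computed source position at its own speed") can be sketched as a per-interval position update. Speeds, interval, and positions below are invented, and the step is clipped so a robot does not overshoot the current estimate.

```python
import math

def advance(robot_xy, target_xy, speed, dt):
    """One interval of motion toward the current source estimate."""
    dx = target_xy[0] - robot_xy[0]
    dy = target_xy[1] - robot_xy[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return robot_xy
    step = min(speed * dt, dist)       # clip: never overshoot the estimate
    return (robot_xy[0] + step * dx / dist,
            robot_xy[1] + step * dy / dist)

pos = advance((0.0, 0.0), (5.0, 5.0), speed=0.5, dt=2.0)   # 1 m toward (5, 5)
near = advance((0.0, 0.0), (0.1, 0.0), speed=0.5, dt=2.0)  # clipped at target
```

Running this update once per interval for each robot, with its own vi, reproduces the shrinking formation that lets later estimates benefit from shorter, less noisy propagation paths.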
The sound signal model received by every robot's microphone is taken as ideal, i.e. the ambient noise is approximated by white Gaussian noise. Let the sound signal generated by the source be S(t), where t denotes the emission time. The signal received by the i-th robot, Si(t), is given by formula (1):
Si(t) = α·S(t − Ti) + n(t)    (1)
In formula (1), Si(t) is the sound signal received by the i-th robot as a function of time t, i = 1, 2, 3, 4; t is the time corresponding to the signal received by the microphone; α is the amplitude attenuation coefficient of the sound signal on its way to the microphone; Ti is the time delay from the source to the i-th robot; n(t) is the ambient noise signal; and S(t − Ti) is the source signal delayed by Ti. Si(t) and n(t) are mutually independent.
The sound arrival time at the current moment is computed from the waveform of the signal received by each robot's microphone.
Arrival time T of sound collected by four robots R1, R2, R3, R4 carrying a single microphone at each moment 1 、T 2 、T 3 、T 4 The coordinates of the sound source at the current moment are calculated by combining the coordinate data of the four robots at the current moment, and the specific operation mode is as follows: the four robots carrying microphones are divided into two groups, the robots R1 and R2 are one group, the robots R3 and R4 are one group, and the maximum value of the arrival time of the sound collected by the robots in the same group is taken, namely, the moment t is assumed:
A=max(T 1 ,T 2 ) (2)
B=max(T 3 ,T 4 ) (3)
Denote the coordinates of robots R1, R2, R3, R4 at moment t by (x1, y1), (x2, y2), (x3, y3), (x4, y4). The true distance between the reference microphone Ms and the sound source is ds, and the time at which Ms receives the sound is Ts, with Ts = ds / C, where C is the speed of sound;
Thus the sound time-difference distance between R1 and R2 at moment t is d12, with C the speed of sound:
d12 = C·(T1 − T2)    (4)
The straight-line distance between R1 and R2 at moment t is L12:
L12 = sqrt((x1 − x2)² + (y1 − y2)²)    (5)
The angle θ1 between the line R1R2 and the estimated sound source position satisfies, under the far-field model:
θ1 = arccos(d12 / L12)    (6)
From the coordinate values of robots R1 and R2 at moment t and formulas (4), (5), (6), the estimated sound source position (xs1·r12, ys1·r12) of this group is obtained (formulas (7) and (8), rendered as images in the source).
Similarly, the estimated sound source position (xs2·r34, ys2·r34) of robots R3 and R4 at moment t is obtained (formulas (9) and (10), rendered as images in the source).
From the two groups' estimates of the sound source position, the mean of the two estimates is computed, each group is assigned a weight according to the difference between its estimate and the mean, each group's estimate is multiplied by its weight, and the products are accumulated to give the source coordinate estimate at moment t.
The coordinate mean of the two groups' estimates is computed as:
x̄s = (xs1·r12 + xs2·r34) / 2
ȳs = (ys1·r12 + ys2·r34) / 2
The difference of each group's estimate from this mean determines its weight (the weight formulas appear as images in the source), and the source coordinate estimate at moment t is obtained as the weighted sum of the two group estimates (these formulas likewise appear as images in the source).
In this way, the source coordinate estimate at each moment is computed in turn.
Fourth, calculating the optimal sound source position:
Average the per-moment source coordinate estimates computed in step three, assign each moment's estimate a weight according to its difference from the average, multiply each estimate by its weight, and accumulate to obtain the optimal source coordinate estimate.
Example 1
This example applies the indoor stationary sound source localization method described above with the following parameters: each robot has a radius of 5 cm, and the initial relative positions of the robots are fixed (the specific coordinate constraints appear as images in the source); N = 2, with the fixed interval Δt as given in the source.
Other parameters are shown in the following table:
table 1 parameters
Figure BDA0002689805540000104
Simulation tests are carried out on the sound sources at positions (5, 5) m and (8, 8) m by adopting the positioning method, and the obtained test results are shown in table 1.
TABLE 1 Sound source localization results
Figure BDA0002689805540000105
The test results in table 1 show that the positioning method of the invention has good positioning accuracy.
Matters not described herein follow the prior art.

Claims (2)

1. An indoor stationary sound source positioning method based on group robots is characterized by comprising the following steps:
first, determining the coordinate relation of the robot in an initial state:
Define a global coordinate system and place four robots R1, R2, R3, R4, each carrying a single microphone, at known initial positions; denote the coordinates of robot Ri at the initial time t0 by (xi(t0), yi(t0)), i = 1, 2, 3, 4. Each robot is circular, the initial relative positions of the robots are fixed, and each microphone is located at the center of its robot, so the microphone coordinates coincide with the robot coordinates;
secondly, collecting the sound arrival time of the group robot:
Assume that the arrival times of the sound collected at time t0 by the four robots R1, R2, R3, R4, each carrying a single microphone, are T1, T2, T3, T4 respectively. The four robots are then controlled to advance toward the sound source, each at its own random speed. After a fixed time interval Δt, at time t1, the sound arrival times are collected again and the arrival times T1, T2, T3, T4 of the four robots at time t1 are recorded. The arrival times are collected every fixed interval Δt and recorded at each collection moment; after N fixed intervals Δt, a data set of N + 1 sets of arrival times is obtained;
thirdly, calculating sound source coordinates:
When computing with the sound arrival time differences, the sound signal received by each robot's microphone is approximated as a plane wave, i.e. the sound field follows a far-field model. A microphone at a fixed position is set up as a reference microphone, whose distance to the true position of the sound source is known.
The sound signal model received by every robot's microphone is taken as ideal, i.e. the ambient noise is approximated by white Gaussian noise. Let the sound signal generated by the source be S(t), where t denotes the emission time. The signal received by the i-th robot, Si(t), is given by formula (1):
Si(t) = α·S(t − Ti) + n(t)    (1)
In formula (1), Si(t) is the sound signal received by the i-th robot as a function of time t, i = 1, 2, 3, 4; t is the time corresponding to the signal received by the microphone; α is the amplitude attenuation coefficient of the sound signal on its way to the microphone; Ti is the time delay from the source to the i-th robot; n(t) is the ambient noise signal; and S(t − Ti) is the source signal delayed by Ti. Si(t) and n(t) are mutually independent. The sound arrival time at the current moment is computed from the waveform of the signal received by each robot's microphone;
From the arrival times T1, T2, T3, T4 of the sound collected at each moment by the four single-microphone robots R1, R2, R3, R4, combined with the robots' coordinates at that moment, the source coordinates at the current moment are computed as follows. The four microphone-carrying robots are divided into two groups, R1 with R2 and R3 with R4, and within each group the larger (later) of the two arrival times is taken; at moment t:
A = max(T1, T2)    (2)
B = max(T3, T4)    (3)
Denote the coordinates of robots R1, R2, R3, R4 at moment t by (x1, y1), (x2, y2), (x3, y3), (x4, y4). The true distance between the reference microphone Ms and the sound source is ds, and the time at which Ms receives the sound is Ts, with Ts = ds / C, where C is the speed of sound;
Thus the sound time-difference distance between R1 and R2 at moment t is d12, with C the speed of sound:
d12 = C·(T1 − T2)    (4)
The straight-line distance between R1 and R2 at moment t is L12:
L12 = sqrt((x1 − x2)² + (y1 − y2)²)    (5)
The angle θ1 between the line R1R2 and the estimated sound source position satisfies, under the far-field model:
θ1 = arccos(d12 / L12)    (6)
From the coordinate values of robots R1 and R2 at moment t and formulas (4), (5), (6), the estimated sound source position (xs1·r12, ys1·r12) of this group is obtained (formulas (7) and (8), rendered as images in the source).
Similarly, the estimated sound source position (xs2·r34, ys2·r34) of robots R3 and R4 at moment t is obtained (formulas (9) and (10), rendered as images in the source).
Calculating the average value of the estimated values of the two groups of sound source positions according to the estimated values of the two groups of sound source positions obtained by the two groups of robots, calculating the difference distribution weight of the estimated value of each group of sound source positions relative to the average value, multiplying the estimated value of each group of sound source positions by the difference distribution weight, and obtaining the estimated value of the sound source coordinates at the moment t after the accumulated addition
Figure FDA0002689805530000035
The coordinate mean of the two sound source position estimates is calculated as follows:
x̄_s = (x_s1·r12 + x_s2·r34) / 2
ȳ_s = (y_s1·r12 + y_s2·r34) / 2
the difference distribution weights of the two sound source position estimates relative to the mean:
[The weight formulas are rendered only as images in the original document]
The estimated sound source coordinates at time t are then the weighted sum of the two group estimates:
x_s^t = w1·x_s1·r12 + w2·x_s2·r34
y_s^t = w1·y_s1·r12 + w2·y_s2·r34
where w1 and w2 denote the difference distribution weights above.
The sound source coordinate estimate at each moment is calculated in turn in the same manner;
fourth, calculating the optimal sound source position:
the sound source coordinate estimates at each moment calculated in step three are averaged; the difference distribution weight of each moment's estimate relative to this mean is computed; each moment's estimate is multiplied by its weight; and the products are summed to give the optimal sound source coordinate estimate.
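The "mean plus difference distribution weight" fusion used in steps three and four can be sketched as follows. The weight formulas are image-only in the source, so the inverse-distance-from-mean scheme below is an assumption consistent with the surrounding text (estimates farther from the mean contribute less); `difference_weighted_mean` and the sample data are illustrative.

```python
import math

def difference_weighted_mean(estimates):
    """Fuse a list of (x, y) position estimates.

    One plausible reading of the patent's 'difference distribution
    weight': each estimate is weighted inversely to its distance from
    the plain mean, so outlying estimates contribute less. The patent's
    exact weight formulas are not recoverable, so this is an assumption.
    """
    n = len(estimates)
    mx = sum(x for x, _ in estimates) / n   # plain coordinate mean
    my = sum(y for _, y in estimates) / n
    eps = 1e-9  # guards against division by zero when an estimate equals the mean
    inv = [1.0 / (math.hypot(x - mx, y - my) + eps) for x, y in estimates]
    s = sum(inv)
    w = [v / s for v in inv]                # normalized difference weights
    fx = sum(wi * x for wi, (x, _) in zip(w, estimates))
    fy = sum(wi * y for wi, (_, y) in zip(w, estimates))
    return fx, fy

# Step four: fuse the per-moment estimates over all moments (hypothetical data)
per_moment = [(1.0, 2.0), (1.1, 2.1), (0.9, 1.9), (5.0, 5.0)]
best = difference_weighted_mean(per_moment)
```

With these sample data the outlying estimate (5.0, 5.0) is down-weighted, pulling the fused result toward the cluster near (1, 2) rather than the plain mean (2.0, 2.75). For exactly two estimates the distances to the midpoint are equal, so the scheme reduces to the plain mean.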
2. The indoor stationary sound source positioning method based on group robots as claimed in claim 1, characterized in that the radius of each robot is 5 cm and the initial relative positions of the robots are fixed, so that
[the initial-position constraint formulas, with N = 2, are rendered only as images in the original document].
CN202010991601.9A 2020-09-18 2020-09-18 Indoor stationary sound source positioning method based on group robot Active CN112098940B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010991601.9A CN112098940B (en) 2020-09-18 2020-09-18 Indoor stationary sound source positioning method based on group robot

Publications (2)

Publication Number Publication Date
CN112098940A CN112098940A (en) 2020-12-18
CN112098940B true CN112098940B (en) 2023-06-09

Family

ID=73760136

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010991601.9A Active CN112098940B (en) 2020-09-18 2020-09-18 Indoor stationary sound source positioning method based on group robot

Country Status (1)

Country Link
CN (1) CN112098940B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114355292B (en) * 2021-12-28 2022-09-23 华南理工大学 Wireless earphone and microphone positioning method thereof

Citations (5)

Publication number Priority date Publication date Assignee Title
CN106405499A (en) * 2016-09-08 2017-02-15 南京阿凡达机器人科技有限公司 Method for robot to position sound source
JP2018034221A (en) * 2016-08-29 2018-03-08 Kddi株式会社 Robot system
JP2018063200A (en) * 2016-10-14 2018-04-19 日本電信電話株式会社 Sound source position estimation device, sound source position estimation method, and program
CN108802689A (en) * 2018-06-14 2018-11-13 河北工业大学 Space microphone localization method based on acoustic source array
CN109471145A (en) * 2018-10-17 2019-03-15 中北大学 A kind of alliteration positioning and orientation method based on acoustic passive location array with four sensors platform

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
WO2007129731A1 (en) * 2006-05-10 2007-11-15 Honda Motor Co., Ltd. Sound source tracking system, method and robot


Non-Patent Citations (1)

Title
Research on auditory localization technology for mobile robots; Rong Maocheng; Zu Linan; Yang Peng; Robot Technique and Application (Issue 01); full text *

Also Published As

Publication number Publication date
CN112098940A (en) 2020-12-18

Similar Documents

Publication Publication Date Title
CN103308889B (en) Passive sound source two-dimensional DOA (direction of arrival) estimation method under complex environment
CN108646221B (en) Space microphone positioning method based on sound source array
CN106842128B (en) The acoustics tracking and device of moving target
US8290178B2 (en) Sound source characteristic determining device
CN104035065A (en) Sound source orienting device on basis of active rotation and method for applying sound source orienting device
CN105044676A (en) Energy-based sound source localization method
CN107167770A (en) A kind of microphone array sound source locating device under the conditions of reverberation
CN103278801A (en) Noise imaging detection device and detection calculation method for transformer substation
CN105607042A (en) Method for locating sound source through microphone array time delay estimation
CN112098940B (en) Indoor stationary sound source positioning method based on group robot
Nakadai et al. Robust tracking of multiple sound sources by spatial integration of room and robot microphone arrays
CN102901949A (en) Two-dimensional spatial distribution type relative sound positioning method and device
CN109212481A (en) A method of auditory localization is carried out using microphone array
CN107290721B (en) A kind of indoor localization method and system
CN109164416B (en) Sound source positioning method of three-plane five-element microphone array
CN107884743A (en) Suitable for the direction of arrival intelligence estimation method of arbitrary structures sound array
CN109905846B (en) Underwater wireless sensor network positioning method based on autonomous underwater vehicle
CN116299182A (en) Sound source three-dimensional positioning method and device
Brutti et al. Speaker localization based on oriented global coherence field
CN110208731A (en) A kind of high frame per second is without fuzzy hydrolocation method
Mattos et al. Passive sonar applications: target tracking and navigation of an autonomous robot
CN113534164B (en) Target path tracking method based on active-passive combined sonar array
CN114994608A (en) Multi-device self-organizing microphone array sound source positioning method based on deep learning
Li et al. A distributed sound source surveillance system using autonomous vehicle network
CN113376578A (en) Sound source positioning method and system based on matching of arrival angle and sound intensity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant