CN108089154A - Distributed sound source detection method and sound detection robot based on same - Google Patents

Distributed sound source detection method and sound detection robot based on same

Info

Publication number
CN108089154A
Authority
CN
China
Prior art keywords
robot
sound
detection
sound source
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711221413.2A
Other languages
Chinese (zh)
Other versions
CN108089154B (en)
Inventor
陈建峰
祁文涛
戚茜
李晓强
闫青丽
周荣艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN201711221413.2A priority Critical patent/CN108089154B/en
Publication of CN108089154A publication Critical patent/CN108089154A/en
Application granted granted Critical
Publication of CN108089154B publication Critical patent/CN108089154B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/18Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using ultrasonic, sonic, or infrasonic waves
    • G01S5/22Position of source determined by co-ordinating a plurality of position lines defined by path-difference measurements
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/026Acoustical sensing devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Acoustics & Sound (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
  • Manipulator (AREA)

Abstract

The present invention relates to a distributed sound source detection method and a sound detection robot based on the method. Compared with a conventional single-device sound source detection system, a multi-robot cooperative sound source detection system based on distributed sound source localization offers a larger detection range, higher detection accuracy, and stronger environmental adaptability and fault tolerance. As a new tool for group-robot environment perception, it greatly improves a robot's ability to perceive its environment and lays a good foundation for cooperative intelligent robot formations.

Description

Distributed sound source detection method and sound detection robot based on same
Technical field
The present invention relates to a distributed sound source detection method and to the design and implementation of a sound detection robot system based on the method.
Background technology
In recent years, intelligent mobile robot technology has developed rapidly. Images and sound are the two main ways in which a robot perceives its environment, and both have shortcomings. Image-based environment perception suffers from large data volumes, sensitivity to lighting, and limited viewing angle and range, while existing sound-based environment perception suffers from limited detection range, poor anti-interference capability and weak reliability. Studying robust and accurate robot auditory perception is therefore of great significance for improving robot performance.
The main task of robot auditory perception is to enable the robot to judge acoustic information in the environment accurately and in real time and to obtain the spatial position of a target relative to itself, i.e., sound source localization. The vast majority of existing robot sound source localization systems are based on a single robot platform and microphone-array localization. Such systems achieve a degree of orientation toward a sound source target, but they are limited by the structure and size of a single robot platform: the microphone array cannot be large, so the detection range is small, the accuracy is low and the reliability is limited. In the far field only the direction of the sound source can be obtained, and accurate sound source localization cannot be achieved.
Distributed sound source localization is a new localization method that uses a wireless sensor network to fuse the data of multiple spatially separated microphone arrays for cooperative localization. Compared with a single-array localization system it effectively increases detection range and localization accuracy, and the failure of a single array generally does not paralyze the whole system, so it is highly reliable and overcomes the shortcomings of a single microphone array.
Multi-robot formation cooperation is a current research hotspot in robotics, and it shares much with distributed sound source localization in terms of system architecture and hardware resources. A review of the available data and papers shows that there has so far been no research on, or application of, multi-robot cooperative sound source detection based on distributed localization methods.
The content of the invention
The technical problem solved by the present invention is: in view of the deficiencies of the prior art and the demand of intelligent robots for sound perception, the present invention combines distributed sound source localization with robot formation coordination, exploiting their common ground in communication systems and hardware resources, and proposes a new distributed sound source detection method based on a multi-robot system. The method optimizes and enhances the ability of existing robots to detect acoustic targets and allows multiple robots to complete more complex cooperative functions. It can be applied to military, industrial and domestic robots and has broad application prospects.
The technical scheme of the present invention is a new distributed sound source detection method, characterized in that:
The method is based on a multi-robot formation cooperative sound source detection system composed of one monitoring computer and several sound detection robots, and comprises the following steps.
Step 1, system layout: M (M >= 3) sound detection robots are dispersed over the region to be monitored according to the optimal layout method, and the monitoring computer is placed inside the region or within d meters (d <= 500) outside it. The optimal layout method is defined as follows:
According to the characteristics of the region to be monitored and the number of robots in the formation, the optimal system layout means that, when laid out in a two-dimensional plane, the geometric figure formed by the lines between adjacent robots tends toward a regular polygon; for example, when the system contains three robots an equilateral-triangle layout is used, and the robots are dispersed as far as site conditions and the system signal-to-noise requirement permit.
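As an illustration of the layout rule above, the following sketch (not part of the original disclosure) computes regular-polygon positions for M robots around the centre of the monitored region; the 50 m × 50 m region and the 30 m spacing of the embodiment are used only as example values.

```python
import math

def polygon_layout(num_robots, center_x, center_y, radius_m):
    """Place num_robots at the vertices of a regular polygon so that the
    lines between adjacent robots form a regular polygon (an equilateral
    triangle when num_robots == 3, as in the embodiment)."""
    positions = []
    for k in range(num_robots):
        angle = 2.0 * math.pi * k / num_robots
        positions.append((center_x + radius_m * math.cos(angle),
                          center_y + radius_m * math.sin(angle)))
    return positions

# Example: three robots about 30 m apart inside a 50 m x 50 m region.
# For an equilateral triangle the circumradius r relates to the side s by
# r = s / sqrt(3).
print(polygon_layout(3, 25.0, 25.0, 30.0 / math.sqrt(3)))
```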
Step 2, role assignment: the M (M >= 3) sound detection robots are divided, according to the role-assignment rule, into one leader robot and M-1 follower robots, forming a robot formation. The leader robot acts as the communication and data-processing centre of the formation and interacts with each follower robot by wireless communication. The role-assignment rule is defined as follows:
After power-up, each robot performs system initialization; during initialization it perceives its own position and heading information through its sensor system and measures the ambient noise level of its own area. After initialization each robot uploads this information to the monitoring computer;
The monitoring computer sorts the robots by ambient noise level and designates the robot with the lowest ambient noise as the leader robot, with formation sequence number 1; the remaining robots are follower robots with formation sequence numbers 2 ... M. The ambient noise is measured as follows: during the initialization stage, every sound detection robot collects T seconds of environmental sound and computes the mean power of the signal to obtain the current ambient noise level;
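A minimal sketch of the ambient-noise ranking used for role assignment, assuming one-channel sample arrays and mean signal power as the noise measure; the example noise values and the use of the communication addresses 0x001F, 0x0020 and 0x0021 from the embodiment are illustrative only.

```python
import numpy as np

def ambient_noise_level(samples):
    """Mean power of a T-second ambient sound recording (one channel)."""
    x = np.asarray(samples, dtype=float)
    return float(np.mean(x ** 2))

def assign_roles(noise_by_robot):
    """Sort robots by ambient noise level and assign formation sequence
    numbers: the quietest robot gets sequence number 1 and becomes the
    leader, the rest are followers with sequence numbers 2 ... M."""
    ranked = sorted(noise_by_robot.items(), key=lambda kv: kv[1])
    return [(addr, seq) for seq, (addr, _) in enumerate(ranked, start=1)]

# Example with the three robot addresses used later in the embodiment.
levels = {0x001F: 2.1e-4, 0x0020: 3.5e-4, 0x0021: 1.8e-4}
print(assign_roles(levels))  # 0x0021 is the quietest and becomes the leader
```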
Step 3, information perception and signal detection: in the static state, all sound detection robots perceive their own attitude and position information in real time and collect sound signals. The time-domain short-time energy, short-time zero-crossing rate and frequency-domain sub-band energy of the received sound signal are compared with the initially set thresholds of these quantities; when all of them are less than or equal to their respective thresholds, a useful signal is considered detected and its starting point is obtained, and the method proceeds to step 4; otherwise step 3 is repeated.
Step 4, sound source direction finding and information exchange: starting from its detected starting point of the useful sound signal, each sound detection robot selects the same number of data points from its signal sequence and computes, using the generalized cross-correlation time-delay estimation algorithm, the azimuth of the sound source target relative to its own microphone array. Each follower robot then uploads its sound source detection result, its own position information and its attitude information to the leader robot.
Step 5, data fusion for sound source localization: after obtaining the sound source detection results and the position and attitude information of each follower robot, and combining them with the data it has detected itself, the leader robot fuses the current position information, heading information and sound source bearing results of all robots in the system, obtains the bearing angle of the sound source relative to each robot in the earth coordinate system, computes the sound source target position with the distributed sound source localization algorithm, and uploads the detection result to the monitoring computer.
Step 6, formation cooperation for localization optimization: the leader robot judges the type of the sound source from the detected sound source targets. If the same sound source target is detected m (m >= 5) consecutive times within a time Tcontinue (Tcontinue >= 10 s), the sound source is judged to be a suspected continuous sound source; otherwise it is judged to be a suspected burst sound source.
If the position of the continuously detected sound source remains unchanged, the sound-producing body is judged to be a suspected static target; otherwise it is judged to be a suspected dynamic target. The leader robot plans the system task accordingly: for suspected static targets and suspected dynamic burst sound sources, the leader robot sends control instructions commanding all robots in the system to keep the current formation and detect the sound source target in place.
For a suspected dynamic continuous sound source target, the leader robot analyses, on the basis of the initial detection results, the relative relationship between the position of the continuous sound source and the spatial geometric topology of the current sound detection robots, and sends to each robot in turn a control instruction containing the coordinates of its optimization point and the task type. Each follower robot in the formation navigates autonomously to its new position, optimizing the distributed detection formation so that the target tends toward the centre of the geometric topology formed by the robots, which improves the accuracy of sound source localization and track following.
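The patent does not prescribe a specific rule for choosing the optimization points; the sketch below assumes one simple possibility, translating the whole formation so that its centroid coincides with the estimated target position, which preserves the regular-polygon geometry while bringing the target toward the centre of the formation.

```python
import numpy as np

def optimization_points(robot_positions, target_estimate):
    """Shift every robot by the same offset so that the centroid of the
    formation lands on the estimated sound source position.  This is only
    an illustrative assumption, not the rule defined in the disclosure."""
    p = np.asarray(robot_positions, dtype=float)   # shape (M, 2), metres
    t = np.asarray(target_estimate, dtype=float)   # shape (2,)
    offset = t - p.mean(axis=0)
    return p + offset

# Example: three robots on an equilateral triangle, target estimated at (40, 10).
print(optimization_points([(0, 0), (30, 0), (15, 26)], (40, 10)))
```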
A further technical solution of the present invention is a sound detection robot based on the method, comprising a robot platform, a robot control system and a robot sensor system. The robot platform comprises a mechanical structure and an electric transmission mechanism. The control system comprises an ARM microcontroller, a power supply system and a motor drive module. The sensor system comprises a sound source direction-finding module, a positioning module, a wireless communication module, an obstacle-avoidance sensor module and an attitude sensor module.
The sound source direction-finding module of the sensor system of the sound detection robot comprises a microphone array, an analog preprocessing circuit, and a data acquisition and signal processing unit.
The microphone array of the sound source direction-finding module comprises two connecting rods and four microphones; the two connecting rods lie in the same horizontal plane and are joined to form a cross. The four microphones are located at the rod ends and are equidistant from the centre of the cross. The analog preprocessing circuit of the sound source direction-finding module is built from operational amplifiers forming amplification and filtering circuits; it preprocesses the analog signals output by the microphone array and passes the output analog signals to the data acquisition and signal processing unit for sampling and for computing the direction of the sound source target. The data acquisition and signal processing unit of the sound source direction-finding module is built from an AD chip and a DSP chip, the AD chip being a multi-channel synchronous acquisition chip with a maximum sampling frequency of at least 20 kHz.
The positioning module of the sensor system of the sound detection robot may be a GPS positioning module, a BeiDou positioning module or a GLONASS positioning module, and provides real-time positioning and time service to the sound detection robot.
The wireless communication module of the robot sensor system may be of various types, for example a module supporting 4G mobile networks, ZigBee or WiFi protocols, and is used for information exchange among the robots of the system.
The attitude sensor module of the robot sensor system of the sound detection robot may use different types of sensors, for example an attitude sensor based on MEMS technology, to perceive the robot's real-time heading and attitude.
The obstacle-avoidance sensor module of the robot sensor system of the sound detection robot may use infrared or ultrasonic sensor modules to detect obstacles in the robot's direction of travel in real time.
The signal flow of a single sound detection robot is as follows:
The sound source direction-finding module in the robot sensor system perceives ambient sound in real time; when a sound source target appears in the environment, it computes the bearing of the sound source and passes the result to the control system. The attitude sensor and ultrasonic sensors in the robot sensor system output the robot's current three-axis attitude data and obstacle detection data to the robot control system in real time. For a follower robot, the sensor-system data received by the control system must be uploaded to the leader robot through the wireless module. For the leader robot, the control system receives its own sensor-system data and at the same time wirelessly receives the data of the other follower robots; after fusion it obtains the sound source bearing result and uploads the result to the monitoring computer through the wireless communication module.
Invention effect
The technical effect of the invention is that, compared with a conventional single-device sound source detection system, the multi-robot cooperative sound source detection system based on distributed sound source localization has a larger detection range, higher detection accuracy, good environmental adaptability, and strong fault tolerance and survivability. As a new tool for group-robot environment perception, it greatly improves a robot's ability to perceive its environment and lays a good foundation for intelligent robot environment perception and formation cooperation.
Description of the drawings
Fig. 1 is a schematic flow chart of the steps of the distributed sound source localization method based on multiple sound detection robots according to one embodiment of the invention;
Fig. 2 is a schematic diagram of the system architecture composed of multiple sound detection robots according to one embodiment of the invention;
Fig. 3 is the communication protocol of the system initialization stage according to one embodiment of the invention;
Fig. 4 is the communication protocol between the monitoring computer of the system and the leader robot according to one embodiment of the invention;
Fig. 5 is the communication protocol between the leader robot of the system and a follower robot according to one embodiment of the invention;
Fig. 6 is a schematic diagram of the principle of the distributed sound source localization algorithm of the robot cooperative sound source detection system according to one embodiment of the invention;
Fig. 7 is a hardware architecture diagram of a single sound detection robot according to one embodiment of the invention;
Fig. 8 is an electrical functional block diagram of a single sound detection robot according to one embodiment of the invention;
Fig. 9 is a structural diagram of the quaternary cross microphone array of the sound source direction-finding module of the robot sensor system of a single sound detection robot according to one embodiment of the invention;
Fig. 10 is a schematic circuit block diagram of the sound source direction-finding module of the robot sensor system of a single sound detection robot according to one embodiment of the invention;
Reference signs: 1 - positioning module; 2 - wireless communication module; 3 - attitude sensor module; 4 - obstacle-avoidance sensor module; 5 - electric transmission mechanism; 6 - robot control system; 7 - sound source direction-finding module; 8 - motor drive module / power supply system.
Specific embodiment
The core idea of the present invention is to use multiple mobile robots with sound source detection capability and, through a wireless sensor network, fuse the sound source bearing information of the robots with a cross-bearing (intersection direction-finding) algorithm to achieve real-time localization and track following of a sound source target.
The distributed sound source detection method of the present embodiment is as follows:
The distributed sound source detection method of this embodiment is based on a multi-robot formation cooperative sound source detection system composed of one monitoring computer and three sound detection robots, and comprises the following steps, as shown in Fig. 1.
Step 1, system layout: the three sound detection robots are placed in a 50 m × 50 m square region; according to site conditions they are laid out, as far as possible, in an equilateral-triangle topology with about 30 m between adjacent robots, following the optimal layout method, and the monitoring computer is placed within 50 m outside the region. The optimal layout method is as follows:
According to the characteristics of the region to be monitored and the number of robots in the formation, the optimal layout means that, in a two-dimensional plane, the geometric figure formed by the lines between adjacent robots tends as far as possible toward a regular polygon; since the system contains three robots, an equilateral-triangle layout is used. For a sound source level of 80 dB, the distance between adjacent robots is kept within 50 m.
Step 2, role assignment: according to the role-assignment method, the three sound detection robots of the system are divided into one leader robot and two follower robots, forming a robot formation. The leader robot acts as the communication and data-processing centre of the formation and interacts with each follower robot by wireless communication. The system structure is shown in Fig. 2. The role-assignment rule is defined as follows:
After power-up, each robot performs system initialization; during initialization it perceives its own position and heading information through its sensor system and measures the ambient noise level of its own area. After initialization each robot uploads this information to the monitoring computer;
The monitoring computer sorts the robots by ambient noise level and designates the robot with the lowest ambient noise as the leader robot, with formation sequence number 1; the remaining robots are follower robots with formation sequence numbers 2 and 3. The ambient noise is measured as follows: during the initialization stage, every sound detection robot samples T seconds (5 s <= T <= 10 s) of environmental sound at a fixed sampling rate, takes the absolute value of the signal sequence, sums it and averages to obtain the current ambient noise level.
Step 3, information perception and signal detection: in the static state, all sound detection robots collect sound signals in real time and detect and analyse them with a sound detection algorithm. The detection principle is to compare the time-domain short-time energy, short-time zero-crossing rate and frequency-domain sub-band energy of the received sound signal with the initially set thresholds of these three quantities; when all of them are less than or equal to their respective thresholds, a useful signal is considered detected and its starting point is obtained;
Step 4, sound source direction finding and information exchange: starting from its detected starting point of the useful signal, each sound detection robot selects the same number of sound-signal data points from its signal sequence and computes, using the sound source direction-finding algorithm (the generalized cross-correlation time-delay estimation algorithm), the azimuth and elevation of the sound source target relative to its own microphone array. Each follower robot uploads its sound source detection result, its own position and its attitude information to the leader robot according to the system communication protocol;
Step 5, data fusion for sound source localization: after obtaining the sound source detection results and the position and attitude information of each follower robot, and combining them with its own data, the leader robot fuses the current position information, heading information and sound source bearing results of all robots, obtains the sound source bearing information of every robot in the earth coordinate system, constructs the system topology, adjusts the parameters of the distributed localization method, computes the sound source target position with the cross-bearing algorithm, and uploads the detection result to the monitoring computer according to the system communication protocol.
Step 6, formation cooperation for localization optimization: the leader robot judges the type of the sound source from the detected sound source targets; if the same sound source target is detected five times in succession within 10 seconds, the sound source is judged to be a suspected continuous sound source, otherwise a suspected burst sound source.
For a continuous sound source target, the leader robot analyses, on the basis of the initial detection results, the relative relationship between the position of the continuous sound source and the spatial geometric topology of the current sound detection robots, and sends to each robot in turn a control instruction containing the coordinates of its optimization point. Each follower robot in the formation then navigates autonomously to its optimization point using an inertial navigation algorithm and a PID control algorithm, optimizing the layout of the distributed sound source detection system and improving the accuracy of sound source localization and track following.
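A minimal sketch of the PID heading control used while a follower robot drives to its optimization point; the gains, the update period and the mapping of the output to wheel commands are assumptions, since the embodiment only states that inertial navigation and a PID control algorithm are used.

```python
class HeadingPID:
    """PID controller that steers the robot toward the heading of its
    optimization point.  Gains and update period are illustrative."""
    def __init__(self, kp=1.2, ki=0.01, kd=0.2, dt=0.05):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, desired_heading_deg, current_heading_deg):
        # Heading error wrapped to (-180, 180] degrees.
        error = (desired_heading_deg - current_heading_deg + 180.0) % 360.0 - 180.0
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # The returned value would be mapped to a differential wheel-speed command.
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```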
The target sound sources of the present embodiment are as follows:
The target sounds addressed by the multi-robot system of this embodiment are mainly wideband stationary signals (in the frequency range 100 Hz to 3000 Hz), such as the engine sound of automobiles, armoured vehicles and helicopters, and short-time non-stationary signals, such as gunshots and applause.
The system communication protocol of the present embodiment is as follows:
The multi-robot system of this embodiment adopts a centralized communication and control structure. The formation contains three robots in total, each with a unique fixed remote address, i.e., communication address: 0x001F, 0x0020 and 0x0021. In the system initialization stage the monitoring computer completes the role assignment of the robots in the formation, dividing the system into one leader robot and two follower robots. The leader robot acts as the communication centre and data-processing centre of the system and is responsible for collecting and fusing system data, while the follower robots detect environmental information under the control of the leader robot. The centralized communication and control structure is shown in Fig. 2.
The communication protocol of the system initialization stage, the protocol between the monitoring computer and the leader robot during normal operation, and the protocol between the leader robot and the follower robots are described separately below.
(1) Communication protocol of the system initialization stage
When the system is first powered on, all three robots in the formation are in standby. The monitoring computer broadcasts a system configuration packet to all robots in the formation to assign the robot roles: one leader robot and two follower robots, with formation sequence numbers defined as 0, 1 and 2 respectively. After receiving the system configuration packet and completing the configuration, each robot replies with an acknowledgement to the monitoring computer in the order of its formation sequence number: the robot with sequence number 0 replies immediately, the robot with sequence number 1 replies after a delay T0, and the robot with sequence number 2 replies after a delay 2 × T0, so the communication time slots are staggered. In Fig. 3 the blue part shows the first system configuration and the red part shows a system reset.
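The staggered acknowledgement slots follow directly from the formation sequence number; the sketch below assumes T0 = 0.1 s purely for illustration.

```python
def ack_delay_seconds(sequence_number, t0_seconds):
    """Reply delay for the configuration acknowledgement: sequence number 0
    (the leader) replies immediately, sequence number 1 after T0, and
    sequence number 2 after 2*T0, so the three ACK time slots never overlap."""
    return sequence_number * t0_seconds

print([ack_delay_seconds(seq, 0.1) for seq in (0, 1, 2)])  # [0.0, 0.1, 0.2]
```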
The system configuration packet contains a role-assignment field represented by one byte. The system configuration packet format defined in this embodiment is shown in Table 1.
Table 1: System configuration packet format
As shown above, the header of the system configuration packet is 0xFE and the packet length is the data length plus 4; the source and destination port numbers are defined as 0xA4, indicating that the packet type is a system configuration instruction. The remote address is 0xFFFF, indicating broadcast mode, i.e., all nodes in the network receive the system configuration packet. The values of the role-assignment field and the corresponding formation sequence numbers are shown in Table 2.
Table 2: Role-assignment field values and corresponding formation sequence numbers
The definition of the system configuration acknowledgement packet (ACK) is shown in Table 3.
Table 3: System configuration acknowledgement (ACK) packet format
As shown above, the source and destination port numbers of the configuration acknowledgement packet are 0xA5 and the remote address is the communication address of the monitoring computer, 0x0022. The confirmation field takes the value 0x00 for configuration failure or 0x01 for configuration success.
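A minimal sketch of packing the configuration packet and its acknowledgement from the field values given above; the byte order and the exact field layout are assumptions, since the table bodies are not reproduced in this text.

```python
import struct

HEADER = 0xFE
PORT_SYS_CONFIG = 0xA4   # system configuration packet
PORT_SYS_ACK = 0xA5      # configuration acknowledgement packet
ADDR_BROADCAST = 0xFFFF  # all robots in the network
ADDR_MONITOR = 0x0022    # monitoring computer

def build_config_packet(role_field):
    """Header, packet length (data length + 4), source/destination ports,
    remote address, then the one-byte role-assignment field."""
    payload = struct.pack("B", role_field)
    return struct.pack(">BBBBH", HEADER, len(payload) + 4, PORT_SYS_CONFIG,
                       PORT_SYS_CONFIG, ADDR_BROADCAST) + payload

def build_ack_packet(success):
    """Confirmation byte 0x01 for configuration success, 0x00 for failure."""
    payload = struct.pack("B", 0x01 if success else 0x00)
    return struct.pack(">BBBBH", HEADER, len(payload) + 4, PORT_SYS_ACK,
                       PORT_SYS_ACK, ADDR_MONITOR) + payload

print(build_config_packet(0x00).hex())
print(build_ack_packet(True).hex())
```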
(2) Between the monitoring computer and the leader robot
After the monitoring computer has successfully completed the configuration of the formation, the appointed leader robot begins to take on the data-fusion and data-collection function of the formation and uploads to the monitoring computer, with period T1, a formation status packet containing the current attitude, position and battery information of every robot in the formation. When a sound source target appears, the leader robot, after computing the sound source target bearing, immediately uploads to the monitoring computer a formation detection packet containing the sound source bearing information detected by all robots in the current formation together with the position, heading and battery information of all robots. The communication timing between the monitoring computer and the leader robot is shown in Fig. 4. Table 4 shows the formation status packet format and Table 5 the detailed robot status information of Table 4.
Table 4: Formation status packet format
Table 5: Robot status information details
As shown above, the header of the formation status packet is 0xFE and the packet length is the data length plus 4, i.e., 37 (0x25 in hexadecimal). The source and destination port numbers are defined as 0xA0, indicating that the packet type is a formation status packet. The remote address is the monitoring computer communication address 0x0022. The robot battery level is represented by one byte taking the value 0x00 or 0x01, where 0x00 indicates low battery and 0x01 normal battery. The robot heading is represented by two bytes, and longitude and latitude by four bytes each. The status information of the follower robots in the table has the same format as that of the leader robot. Table 6 shows the formation detection packet format and Table 7 the details of the robot direction-finding information and status information of Table 6.
Table 6: Formation detection packet format
Table 7: Robot direction-finding information and status information details
As shown above, the header of the formation detection packet is 0xFE and the packet length is the data length plus 4, i.e., 47 (0x2F in hexadecimal). The source and destination port numbers are defined as 0xA1, indicating that the packet type is a formation detection packet. The remote address is the monitoring computer communication address 0x0022, consistent with the address set at system initialization. The formation localization result is represented by four bytes, the first two for the x coordinate and the last two for the y coordinate. The robot battery level is represented by one byte taking the value 0x00 or 0x01, where 0x00 indicates low battery and 0x01 normal battery. The robot sound source bearing information and heading information are each represented by two bytes, and longitude and latitude by four bytes each. The direction-finding information and status information of the follower robots in the table have the same format as those of the leader robot.
(3) Between the leader robot and the follower robots
When the system is working, on the one hand each follower robot uses the GPS time-service pulse as a synchronous time tag according to its sequence number in the formation: the follower robot with sequence number 1 transmits immediately after the GPS one-pulse-per-second trigger, and the follower robot with sequence number 2 transmits after a delay T2 following the trigger, so the communication time slots are staggered; each second they upload their own unit status packets to the leader robot. When a follower robot detects a sound source target, it immediately uploads its unit detection packet to the leader robot. On the other hand, the leader robot issues control packets to the follower robots by broadcast or unicast to control the follower robots' working states. A robot's unit status packet contains the current position, heading and battery information of that robot; a robot's unit detection packet contains the sound source bearing information, position, heading and battery information of that robot. The communication timing is shown in Fig. 5. Table 8 shows the follower robot unit status packet format.
Table 8: Follower robot unit status packet format
As shown above, the header of the unit status packet is 0xFE and the packet length is the data length plus 4, i.e., 15 (0x0F in hexadecimal). The source and destination port numbers are defined as 0xB0, indicating that the packet type is a unit status packet. The remote address is the leader robot communication address, consistent with the leader robot address set at system initialization. The heading information of a single robot is represented by two bytes, and the robot longitude and latitude by four bytes each.
Table 9 shows the follower robot unit detection packet format.
Table 9: Follower robot unit detection packet format
As shown above, the header of the unit detection packet is 0xFE and the packet length is the data length plus 4, i.e., 17 (0x11 in hexadecimal). The source and destination port numbers are defined as 0xB1, indicating that the packet type is a unit detection packet. The remote address is the leader robot communication address, consistent with the leader robot address set at system initialization. The sound source bearing information and heading information of a single robot are each represented by two bytes, and the robot longitude and latitude by four bytes each.
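A minimal sketch of packing a follower robot's unit detection packet from the byte sizes stated above (2-byte bearing and heading, 4-byte longitude and latitude, plus a 1-byte battery flag, giving the stated packet length of 17); the field order, the angle and coordinate scaling, and the example leader address are assumptions.

```python
import struct

HEADER = 0xFE
PORT_UNIT_DETECT = 0xB1

def build_unit_detect_packet(leader_addr, bearing_deg, heading_deg,
                             lon_deg, lat_deg, battery_ok=True):
    """Follower robot unit detection packet.  Angles are scaled to 0.01 deg
    and coordinates to 1e-7 deg as an illustrative convention."""
    payload = struct.pack(">HHiiB",
                          int(bearing_deg * 100) & 0xFFFF,
                          int(heading_deg * 100) & 0xFFFF,
                          int(lon_deg * 1e7),
                          int(lat_deg * 1e7),
                          0x01 if battery_ok else 0x00)
    # Packet length field = data length + 4 = 13 + 4 = 17 (0x11).
    return struct.pack(">BBBBH", HEADER, len(payload) + 4, PORT_UNIT_DETECT,
                       PORT_UNIT_DETECT, leader_addr) + payload

pkt = build_unit_detect_packet(0x001F, 123.4, 87.6, 108.9541, 34.2473)
print(pkt.hex())
```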
The sound signal detection method of the sound detection robot in this embodiment is as follows:
The sound signal detection method of the sound detection robot is based on time-domain and frequency-domain analysis of a target sound signal s(n) of total length N, where n is the discrete time index. s(n) is divided into frames; i denotes the frame number and N1 the length of each frame, so the i-th frame of the signal, its short-time energy and its short-time zero-crossing rate are denoted x_i(n) (n = 1, 2, ..., N1), E_i and Z_i. The spectrum of s(n) is denoted S(w) and is likewise divided into frames; j denotes the sub-band number and N2 the length of each spectral frame, i.e., sub-band, and the energy of the j-th sub-band is P_j. The time-domain short-time energy E_i, short-time zero-crossing rate Z_i and frequency-domain sub-band energy P_j of the target sound signal s(n) and of the ambient noise noise(n) are analysed to obtain normalized thresholds of these three features for different sound signals, which are used to detect and identify the sound signals collected by the system. Starting from the beginning of the whole signal sequence, the three features are compared with the thresholds; when all three are below their thresholds, a signal is considered detected, that moment is taken as the effective starting point of the target signal, and the 512 data points following the starting point are extracted from the sequence and defined as the direct-path signal of the sound. Here:
(1) The short-time energy E_i of the sound signal on any single microphone channel:

E_i = \sum_{n=1}^{N_1} x_i^2(n)

(2) The short-time zero-crossing rate Z_i of the sound signal on any single microphone channel:

Z_i = \frac{1}{2} \sum_{n=2}^{N_1} \left| \operatorname{sgn}[x_i(n)] - \operatorname{sgn}[x_i(n-1)] \right|

where sgn[·] is the mathematical sign function, defined as

\operatorname{sgn}[x] = \begin{cases} 1, & x \ge 0 \\ -1, & x < 0 \end{cases}

(3) The frequency-domain sub-band energy P_j of the sound signal on any single microphone channel:

P_j = \sum_{w \in \text{sub-band } j} |S(w)|^2

where S(w) is the spectrum of the sound signal on that microphone channel.
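A minimal sketch of the three detection features defined above, assuming a single-channel frame array and equal-width spectral sub-bands; the threshold values themselves are obtained off-line and are not reproduced here.

```python
import numpy as np

def short_time_energy(frame):
    """E_i: sum of squared samples of one frame."""
    x = np.asarray(frame, dtype=float)
    return float(np.sum(x ** 2))

def short_time_zcr(frame):
    """Z_i: half the accumulated sign changes within the frame."""
    x = np.asarray(frame, dtype=float)
    s = np.where(x >= 0, 1.0, -1.0)
    return 0.5 * float(np.sum(np.abs(np.diff(s))))

def subband_energies(frame, num_subbands=8):
    """P_j: energy of each of num_subbands equal slices of the frame spectrum."""
    spectrum = np.abs(np.fft.rfft(np.asarray(frame, dtype=float))) ** 2
    return [float(np.sum(band)) for band in np.array_split(spectrum, num_subbands)]

def signal_detected(frame, e_thr, z_thr, p_thr, num_subbands=8):
    """Declare a useful signal when all three features are at or below their
    thresholds, following the comparison direction stated in the description."""
    return (short_time_energy(frame) <= e_thr
            and short_time_zcr(frame) <= z_thr
            and all(p <= p_thr for p in subband_energies(frame, num_subbands)))
```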
The sound source direction-finding algorithm of the sound detection robot in this embodiment is as follows:
The sound source direction-finding algorithm of the sound detection robot uses the generalized cross-correlation time-delay estimation algorithm. After a target sound source is detected, the time differences of arrival among the four microphone signals are estimated by applying the generalized cross-correlation time-delay estimation algorithm to the direct-path signals of the four microphone channels, and the horizontal azimuth of the sound signal relative to the array is obtained from the geometric relationship of the quaternary array.
Let s_i(n) and s_j(n) be the detected direct-path sound signals of the i-th and j-th channels.

(1) The generalized cross-correlation time-delay estimate R'(n) of s_i(n) and s_j(n) is then

R'(n) = \mathcal{F}^{-1}\left\{ W_n(\omega)\, R_{ij}(\omega) \right\}

where R_{ij}(\omega) = S_i(\omega) S_j^{*}(\omega) is the frequency-domain cross-correlation (cross-power spectrum) of s_i(n) and s_j(n), S_i(\omega) and S_j(\omega) are the spectra of s_i(n) and s_j(n) respectively, and W_n(\omega) is the frequency-domain weighting function.
(2) Direction resolution

Search the R'(n) obtained above for its maximum; the position of the maximum gives the time difference τ between the sound source signals at the two microphones. Combined with the distance between the microphone pair, the sound source direction is resolved from the geometric relationship.
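A minimal sketch of the generalized cross-correlation delay estimate and of the far-field azimuth for one opposed microphone pair of the cross array; PHAT weighting is assumed here because the weighting function W_n(ω) is left unspecified, and the 0.2 m element spacing of the embodiment is used only as an example.

```python
import numpy as np

def gcc_delay(si, sj, fs, eps=1e-12):
    """Delay of channel j relative to channel i, in seconds (positive when
    the sound reaches microphone i first), via generalized cross-correlation
    with PHAT weighting."""
    n = len(si) + len(sj)
    Si = np.fft.rfft(si, n)
    Sj = np.fft.rfft(sj, n)
    cross = Sj * np.conj(Si)                                 # cross-power spectrum
    r = np.fft.irfft(cross / (np.abs(cross) + eps), n)
    r = np.concatenate((r[-(len(sj) - 1):], r[:len(si)]))    # lags -(Nj-1)..Ni-1
    lag = int(np.argmax(r)) - (len(sj) - 1)
    return lag / fs

def azimuth_from_pair(tau, d=0.2, c=343.0):
    """Far-field angle (radians) between the incoming sound direction and the
    axis of a microphone pair with element spacing d metres."""
    return float(np.arccos(np.clip(c * tau / d, -1.0, 1.0)))
```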
The cross-bearing (intersection direction-finding) algorithm used for sound source localization in this embodiment is as follows:
The system establishes a two-dimensional xoy plane coordinate system with the leader robot position o as the origin, geographic north as the positive x-axis and geographic east as the positive y-axis.
Based on the cross-bearing algorithm, the leader robot fuses the sound source bearing angles φ_i measured at the same instant by the M sound detection robots with their current heading angles ψ_i to obtain the sound source bearing angles θ_i in the earth coordinate system. The latitude and longitude of every robot in the formation, obtained by GPS, are converted into position coordinates p_i(x_i, y_i) in the earth coordinate system relative to the leader robot. The target position \hat{p} = (\hat{x}, \hat{y}) is then estimated with the cross-bearing localization algorithm and least squares; the subscript i denotes the robot sequence number, i = 1, 2, ..., M. The principle is shown in Fig. 6, and the target position estimate is

\hat{p} = (A^{\mathsf T} A)^{-1} A^{\mathsf T} C   (10)

where

A = \begin{bmatrix} \sin\theta_1 & -\cos\theta_1 \\ \vdots & \vdots \\ \sin\theta_M & -\cos\theta_M \end{bmatrix}

C = [g_1\ g_2\ \cdots\ g_M]^{\mathsf T}   (11)

g_i = x_i \sin\theta_i - y_i \cos\theta_i   (12)
The sound source bearing angle θ_i in the earth coordinate system is calculated as follows:

When φ_i + ψ_i < 360°, the sound source bearing angle in the earth coordinate system is

θ_i = φ_i + ψ_i   (13)

When φ_i + ψ_i ≥ 360°, the sound source bearing angle in the earth coordinate system is

θ_i = φ_i + ψ_i − 360°   (14)
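A minimal sketch of the least-squares bearing intersection of equations (10) to (12), with the x axis pointing north and the y axis east as defined above; the example positions and target are illustrative.

```python
import numpy as np

def cross_bearing_fix(positions, bearings_deg):
    """Estimate the target position from bearing lines.

    positions    : (x_i, y_i) robot coordinates relative to the leader,
                   x toward geographic north, y toward east (metres).
    bearings_deg : earth-frame sound source bearings theta_i in degrees.
    Implements p_hat = (A^T A)^{-1} A^T C with rows [sin(theta_i), -cos(theta_i)]
    and g_i = x_i*sin(theta_i) - y_i*cos(theta_i)."""
    theta = np.radians(np.asarray(bearings_deg, dtype=float))
    p = np.asarray(positions, dtype=float)
    A = np.column_stack((np.sin(theta), -np.cos(theta)))
    C = p[:, 0] * np.sin(theta) - p[:, 1] * np.cos(theta)
    est, *_ = np.linalg.lstsq(A, C, rcond=None)
    return tuple(est)

# Example: three robots, target actually at (40, 20) relative to the leader.
pos = [(0.0, 0.0), (30.0, 0.0), (15.0, 26.0)]
brg = [np.degrees(np.arctan2(20.0 - y, 40.0 - x)) % 360 for x, y in pos]
print(cross_bearing_fix(pos, brg))  # approximately (40.0, 20.0)
```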
The azimuth reference of the microphone array of the sound source direction-finding module of each sound detection robot in the system increases clockwise from 0° to 360°.
The sound detection robot based on the distributed sound source detection method of this embodiment:
The monitoring computer of the robot cooperative sound source detection system in this embodiment is a PC program developed with the MATLAB GUI. The leader robot and the follower robots in this embodiment are identical in hardware and function; all have sound detection, wireless communication, GPS positioning, attitude perception and obstacle detection capabilities.
In this embodiment the leader robot serves as the communication centre node: according to the system communication protocol it communicates upward with the monitoring computer over a wireless link, and downward it communicates wirelessly with the two follower robots in a time-division manner. From the sound source detection information of each robot in the formation, the leader robot obtains the sound source target position and track information using the corresponding sound source localization algorithm, plans the follow-up tasks of the system, and directs the movement of the follower sound detection robots.
The sound detection robot in this embodiment comprises a robot platform, a robot control system and a robot sensor system, as shown in Fig. 7.
The robot platform of the sound detection robot in this embodiment comprises a mechanical structure and an electric transmission mechanism. In this example the robot platform is a wheeled four-wheel-drive intelligent cart robot with a platform size of 23 cm × 19 cm and good stability and mobility, as shown in Fig. 7.
The robot control system of the sound detection robot in this embodiment comprises an ARM microcontroller, a power supply system and a motor drive module. In this embodiment a 32-bit high-performance ARM microcontroller STM32F103ZET6 is used, the power supply system uses a 9 V, 5000 mAh rechargeable lithium battery, and the motor drive module uses the high-current motor driver IC L293D; the control system is stable and powerful.
The robot sensor system of the sound detection robot in this embodiment comprises a sound source direction-finding module, a positioning module, a wireless communication module, an obstacle-avoidance sensor module and an attitude sensor module.
The sound source direction-finding module of the robot sensor system of the sound detection robot in this embodiment comprises a microphone array, an analog preprocessing circuit, and a data acquisition and signal processing unit, as shown in Fig. 10. In this example the sound source direction-finding module is connected to the robot control system through an SPI interface, as shown in Fig. 7.
The quaternary cross microphone array of the sound source direction-finding module of the robot sensor system in this embodiment is built from screened, well-matched electret microphones. The four microphones lie in the same plane and are arranged in a cross; when the vehicle body is level, this plane is parallel to the horizontal plane, and the element spacing of each microphone pair is 0.2 m, as shown in Fig. 9.
The analog preprocessing circuit of the sound source direction-finding module of the robot sensor system in this embodiment uses AD8656 operational amplifiers to build filtering and amplification circuits; it preprocesses the analog signals output by the microphone array and passes the output analog signals to the data acquisition and signal processing unit for sampling. The data acquisition and signal processing unit of the sound source direction-finding module in this embodiment is built from a multi-channel synchronous acquisition AD chip and a DSP chip; the AD chip is the four-channel high-speed synchronous sampling chip AD7606 with a maximum sampling frequency of 200 kHz, and the DSP is a TMS320F2812 chip.
The positioning module of the sensor system of the sound detection robot in this embodiment is a u-blox NEO-M8N GPS module connected to the robot control system through a UART interface, as shown in Fig. 8.
The wireless communication module of the robot sensor system of the sound detection robot in this embodiment is a 2.4 GHz ZigBee wireless module DL-LN32P with a maximum communication distance of 500 m, connected to the robot control system through a UART interface, as shown in Fig. 8.
The obstacle-avoidance sensor module of the robot sensor system of the sound detection robot in this embodiment uses two pairs of infrared photoelectric tube sensors with a maximum detection range of 0.5 m, connected to the control system through ordinary I/O ports, as shown in Fig. 8.
The attitude sensor of the robot sensor system of the sound detection robot in this embodiment is a GY953 nine-axis attitude sensor, which directly outputs three-axis Euler angles and is connected to the robot control system through an SPI interface.
The innovation of this patent is as follows: previous robot sound source detection systems have all been based on a single robot. Limited by the size of a single robot, the aperture of the microphone array carried on the robot is small; the direction-finding accuracy of such a system may be acceptable, but the range error is large, so the localization accuracy is low and the detection range is limited. The present system combines distributed sound source localization with multi-robot coordination and has the advantages of higher detection accuracy and strong environmental adaptability and fault tolerance. As a new tool for group-robot environment perception, it greatly improves a robot's ability to perceive its environment, lays a good foundation for intelligent robot environment perception and formation cooperation, and has broad application prospects.
The content of the present invention is not limited to the cited embodiments; any equivalent transformation of the technical solution of the present invention made by a person of ordinary skill in the art after reading the description and claims of the present invention is covered by the claims of the present invention.

Claims (10)

  1. A distributed sound source detection method, characterized in that:
    the method comprises the following steps:
    Step 1, system layout: M (M >= 3) sound detection robots are dispersed over the region to be monitored, and the monitoring computer is placed within wireless communication range;
    Step 2, role assignment: the M (M >= 3) sound detection robots are divided, according to the role-assignment rule, into one leader robot and M-1 follower robots, forming a robot formation; the leader robot acts as the communication and data-processing centre of the formation and interacts with each follower robot by wireless communication; the role-assignment rule is defined as follows:
    after power-up, each robot performs system initialization; during initialization it perceives its own position and heading information through its sensor system and measures the ambient noise level of its own area; after initialization each robot uploads this information to the monitoring computer;
    the monitoring computer sorts the robots by ambient noise level and designates the robot with the lowest ambient noise as the leader robot, with formation sequence number 1; the remaining robots are follower robots with formation sequence numbers 2 ... M; the ambient noise is measured as follows: during the initialization stage, every sound detection robot collects T seconds of environmental sound and computes the mean power of the signal to obtain the current ambient noise level;
    Step 3, information perception and signal detection: all sound detection robots perceive their own attitude and position information in real time and collect sound signals; the time-domain short-time energy, short-time zero-crossing rate and frequency-domain sub-band energy of the received sound signal are compared with the initially set thresholds of these quantities; when all of them are greater than or equal to their respective thresholds, a useful signal is considered detected and its starting point is obtained, and the method proceeds to step 4; otherwise step 3 is repeated;
    Step 4, sound source direction finding and information exchange: starting from its detected starting point of the useful sound signal, each sound detection robot selects the same number of data points from its signal sequence and computes, using the generalized cross-correlation time-delay estimation algorithm, the azimuth of the sound source target relative to its own microphone array; each follower robot uploads its sound source detection result, its own position information and its attitude information to the leader robot;
    Step 5, data fusion for sound source localization: after obtaining the sound source detection results and the position and heading information of each follower robot, and combining them with the data it has detected itself, the leader robot fuses the current position information, heading information and sound source bearing results of all robots in the system, obtains the bearing angle of the sound source relative to each robot in the earth coordinate system, computes the sound source target position with the distributed sound source localization algorithm, and uploads the detection result to the monitoring computer;
    Step 6, formation cooperation for localization optimization: for a continuous sound source target, the leader robot analyses, on the basis of the initial detection results, the relative relationship between the position of the continuous sound source and the spatial geometric topology of the current sound detection robots, and sends to each robot in turn a control instruction containing the coordinates of its optimization point; each follower robot in the formation navigates autonomously to its optimization point using an inertial navigation algorithm and a PID control algorithm, optimizing the layout of the distributed sound source detection system and improving the accuracy of sound source localization and track following.
  2. A sound detection robot constructed for the distributed sound source detection method according to claim 1, comprising a robot platform, a robot control system and a robot sensor system; the robot platform comprises a mechanical structure and an electric transmission mechanism; the robot control system comprises an ARM microcontroller, a power supply module and a motor drive module; the robot sensor system comprises a sound source direction-finding module, a positioning module, a wireless communication module, an obstacle-avoidance sensor module and an attitude sensor module; the sound source direction-finding module in the robot sensor system perceives ambient sound in real time and, when a sound source target appears in the environment, computes the bearing of the sound source and passes the result to the control system; the attitude sensor module and the obstacle-avoidance sensor module in the robot sensor system output the robot's current three-axis attitude data and obstacle detection data to the robot control system in real time; for a follower robot in the system, the sensor-system data received by the control system are uploaded to the leader robot through the wireless module; for the leader robot in the system, the control system receives its own sensor-system data and at the same time wirelessly receives the data of the other follower robots, and after fusion obtains the sound source bearing result and uploads the result to the monitoring computer through the wireless communication module.
  3. The sound detection robot according to claim 2, characterized in that the sound source direction-finding module of the robot sensor system comprises a microphone array, an analog preprocessing circuit, and a data acquisition and signal processing unit.
  4. The sound detection robot according to claim 3, characterized in that the microphone array of the sound source direction-finding module of the robot sensor system comprises two connecting rods and four microphones; the two connecting rods lie in the same horizontal plane and are joined to form a cross; the four microphones are located at the rod ends and are equidistant from the centre of the cross; the microphones may be electret microphones or MEMS microphones.
  5. The sound detection robot according to claim 2, characterized in that the analog preprocessing circuit of the sound source direction-finding module of the robot sensor system uses operational amplifiers to build amplification and filtering circuits, amplifies and filters the analog signals output by the microphone array, and passes the output analog signals to the data acquisition and signal processing unit for sampling and signal processing to compute the direction of the sound source target.
  6. The sound detection robot according to claim 3, characterized in that the data acquisition and signal processing unit of the sound source direction-finding module of the robot sensor system is built from a multi-channel synchronous acquisition AD chip and a DSP chip, the maximum sampling frequency of the AD chip being at least 20 kHz.
  7. The sound detection robot according to claim 2, characterized in that the positioning module of the robot sensor system may be a GPS positioning module, a BeiDou positioning module or a GLONASS positioning module, for providing real-time positioning and time service to the sound detection robot.
  8. The sound detection robot according to claim 3, characterized in that the wireless communication module of the robot sensor system may be of various types, for example supporting 4G mobile networks, ZigBee or WiFi protocols, and is used for information exchange among the robots of the system.
  9. The sound detection robot according to claim 3, characterized in that the attitude sensor module of the robot sensor system may use different types of sensors, for example an attitude sensor based on MEMS technology, to perceive the robot's real-time heading and attitude.
  10. The sound detection robot according to claim 3, characterized in that the obstacle-avoidance sensor module of the robot sensor system may use infrared or ultrasonic sensor modules to detect obstacles in the robot's direction of travel in real time.
CN201711221413.2A 2017-11-29 2017-11-29 Distributed sound source detection method and sound detection robot based on same Active CN108089154B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711221413.2A CN108089154B (en) 2017-11-29 2017-11-29 Distributed sound source detection method and sound detection robot based on same

Publications (2)

Publication Number Publication Date
CN108089154A true CN108089154A (en) 2018-05-29
CN108089154B CN108089154B (en) 2021-06-11

Family

ID=62173279

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711221413.2A Active CN108089154B (en) 2017-11-29 2017-11-29 Distributed sound source detection method and sound detection robot based on same

Country Status (1)

Country Link
CN (1) CN108089154B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08145714A (en) * 1994-11-18 1996-06-07 Shigeo Hirose Information fusing system
CN1297394A (en) * 1999-03-24 2001-05-30 索尼公司 Robot
US7298868B2 (en) * 2002-10-08 2007-11-20 Siemens Corporate Research, Inc. Density estimation-based information fusion for multiple motion computation
CN102411138A (en) * 2011-07-13 2012-04-11 北京大学 Robot sound source positioning method
US20130335407A1 (en) * 2011-08-26 2013-12-19 Reincloud Corporation Coherent presentation of multiple reality and interaction models
CN205067729U (en) * 2015-08-17 2016-03-02 旗瀚科技股份有限公司 Sound source localization processing module for realizing the robot auditory function
CN105425212A (en) * 2015-11-18 2016-03-23 西北工业大学 Sound source locating method
CN106405499A (en) * 2016-09-08 2017-02-15 南京阿凡达机器人科技有限公司 Method for robot to position sound source
CN206643934U (en) * 2017-03-31 2017-11-17 长春理工大学 Multi-information acquisition and perception search and rescue robot

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LIU HONG: "Bi-Direction Interaural Matching Filter and Decision Weighting Fusion for Sound Source Localization in Noisy Environments", 《IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS》 *
YU-HAN CHENG, QING-HAO MENG, YING-JIE LIU, MING ZENG, LE XUE: "Fusing Sound and Dead Reckoning for Multi-robot Cooperative", 《2016 12TH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION》 *
WU YUXIU, MENG QINGHAO, ZENG MING: "Sound-based distributed relative localization of multiple robots", 《自动化学报》 (Acta Automatica Sinica) *
ZHANG ZHU, CHEN JIANFENG, CHENG PING: "A distributed microphone array localization algorithm and performance analysis", 《微型机与应用》 (Microcomputer & Its Applications) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108958202A (en) * 2018-07-27 2018-12-07 齐齐哈尔大学 Multi-robot collaborative exploration method
CN108958202B (en) * 2018-07-27 2020-11-24 齐齐哈尔大学 Multi-robot collaborative exploration method
CN110300355A (en) * 2019-05-07 2019-10-01 广东工业大学 Intelligent microphone that moves following the sound source position
CN110764053A (en) * 2019-10-22 2020-02-07 浙江大学 Multi-target passive positioning method based on underwater sensor network
CN111110490A (en) * 2019-12-13 2020-05-08 南方医科大学南方医院 Multifunctional operation nursing trolley
CN113326899A (en) * 2021-06-29 2021-08-31 西藏新好科技有限公司 Piglet compression detection method based on deep learning model
CN113331135A (en) * 2021-06-29 2021-09-03 西藏新好科技有限公司 Statistical method for death and washout rate and survival rate of pressed piglets
CN113791727A (en) * 2021-08-10 2021-12-14 广东省科学院智能制造研究所 Edge acquisition equipment applied to industrial acoustic intelligent sensing
CN116448231A (en) * 2023-03-24 2023-07-18 安徽同钧科技有限公司 Urban environment noise monitoring and intelligent recognition system
CN117111139A (en) * 2023-08-04 2023-11-24 中国水利水电科学研究院 Multi-point rapid detection device and technology for termite nest of high-coverage dam
CN117111139B (en) * 2023-08-04 2024-03-05 中国水利水电科学研究院 Multi-point rapid detection device and technology for termite nest of high-coverage dam

Also Published As

Publication number Publication date
CN108089154B (en) 2021-06-11

Similar Documents

Publication Publication Date Title
CN108089154A (en) Distributed acoustic source detection method and the sound-detection robot based on this method
CN108226852B (en) Unmanned aerial vehicle operator positioning system and method based on aerial radio monitoring platform
CN103313194B (en) Personnel movement track acquisition device and method for an indoor positioning system based on beacon positioning technology
Aman et al. Reliability evaluation of iBeacon for micro-localization
CN102901949A (en) Two-dimensional spatial distribution type relative sound positioning method and device
CN103068038A (en) Indoor bidirectional positioning method based on Zigbee network
CN108413966A (en) Positioning method for an indoor positioning system based on multiple sensing and ranging technologies
CN102547973B (en) RSSI (received signal strength indicator)-based multi-sensor fusion mobile node tracking method
Liu et al. The performance evaluation of hybrid localization algorithm in wireless sensor networks
Whitmire et al. Acoustic sensors for biobotic search and rescue
CN103702413A (en) Real-time following and positioning system of indoor camera based on Zigbee network
CN116847321B (en) Bluetooth beacon system, bluetooth positioning device and readable storage medium
Zhou et al. Visible light-based robust positioning under detector orientation uncertainty: A gabor convolutional network-based approach extracting stable texture features
CN207448485U (en) Service robot
Genco Three step bluetooth positioning
CN113825100B (en) Positioning object searching method and system
CN110907894A (en) Remote control type life detection device and detection method thereof
WO2015186218A1 (en) Wireless communication system
CN113483811B (en) Distributed air quality parameter monitoring node device based on Raspberry Pi
CN112954591B (en) Cooperative distributed positioning method and system
CN205404789U (en) Indoor positioning system
CN112720448A (en) Positioning robot for self-recognition and positioning system thereof
Sabale et al. An analysis of path planning mechanisms in wireless sensor networks
Li et al. A distributed sound source surveillance system using autonomous vehicle network
Wanqing et al. Improved PSO-extreme learning machine algorithm for indoor localization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant