CN115209526A - Positioning method based on multi-source fusion - Google Patents
- Publication number
- CN115209526A (application CN202111657856.2A)
- Authority
- CN
- China
- Prior art keywords
- pnt
- sensors
- positioning
- terminal
- integrating
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W64/00—Locating users or terminals or network equipment for network management purposes, e.g. mobility management
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Position Fixing By Use Of Radio Waves (AREA)
Abstract
The invention discloses a positioning method based on multi-source fusion. Its core idea is that a device or terminal integrating multiple positioning, navigation, and timing (PNT) sensors reports its multi-source-fusion positioning capability to the network side, or exchanges that capability directly with other devices or terminals integrating multiple PNT sensors. The multi-source-fusion positioning capability comprises three aspects: multi-task role capability, autonomous positioning capability, and cooperative positioning capability. With these three capabilities in place, the network side can configure nodes according to their differing capabilities and, in a complex scenario, centrally plan and deploy one or more PNT-sensor-integrated devices or terminals in the network to complete a reconnaissance task cooperatively; alternatively, multiple such devices or terminals can complete the reconnaissance task cooperatively in a distributed manner. The method suits unmanned clustering, man-machine cooperation, and similar fields in complex scenarios, and has broad application prospects in military and civil-military-integrated unmanned cluster applications.
Description
Technical Field
The invention relates to the fields of multi-source fusion positioning and unmanned cluster applications.
Background
Applications based on unmanned systems have penetrated many sectors of the national economy, such as hotel reception robots, floor-sweeping robots, inspection robots, unmanned vehicles, unmanned aerial vehicles, and unmanned ships. With rapid technological development, the application trend is evolving from tasks completed by a single unmanned device, terminal, or piece of equipment toward tasks completed cooperatively by multiple devices, making high-precision positioning and navigation, together with cooperative communication and positioning for unmanned clusters in complex scenarios, a hotspot of industry attention.
Take a military reconnaissance scenario as an example: unmanned cluster tactics are a cooperative combat method that looks random but is in fact ordered. Taking micro unmanned aerial vehicles (UAVs) as an example, multiple small UAVs can be launched continuously and rapidly from multiple launchers. This new type of intelligent reconnaissance device is fast, flexible, concealable, and portable, and can reconnoiter unfamiliar environments and even precisely reconnoiter combat targets. A small rotor-type reconnaissance UAV is compact, can take off and land vertically, flies slowly with a small turning radius, can thread through main streets lined with high-rise buildings, fly down narrow lanes, and even enter indoor building scenes, achieving autonomous, seamless positioning and navigation across indoor and outdoor complex scenarios. Because it is small, quick to assemble and disassemble, and simple to control, the reconnaissance UAV can also be packed in a dedicated case and carried on an individual soldier's back; after moving to a suitable command-and-control position, it can be assembled quickly and flown. In such a tactical reconnaissance scenario, the soldier's risk of exposure is reduced, and the efficiency and safety of "soldier-UAV" cooperative reconnaissance are improved. Once launched into the air, the UAV rapidly unfolds its folding wings and flies. A UAV can execute a task alone or share information with others, forming a swarm through autonomous flight and automatic formation; this kind of cluster combat can greatly improve efficiency.
The system can optimize an attack plan in real time according to information such as the state of the target, task completion status, and each UAV's payload and position, assign the most suitable UAV to execute a task, and ensure through manual intervention at necessary nodes that the cooperative task is completed. Clearly, such cluster cooperation does not exist only among UAVs; it also includes cooperation between unmanned vehicles and UAVs, between unmanned vehicles, between individual soldiers and unmanned vehicles, between unmanned ships and UAVs, and so on. However, these applications still face many technical difficulties, the most important being how to obtain reliable, continuous, high-precision positioning in GNSS-denied scenarios. Whether the carrier is a UAV, an unmanned ship, a robot, or another machine, each carrier has a module or unit responsible for its positioning and navigation; these modules or units are collectively called unmanned devices or unmanned terminals in this invention. Because different unmanned devices and terminals integrate different sensors, their positioning capabilities differ: some can achieve autonomous positioning and mapping in unfamiliar scenes; some can achieve positioning only in scenes where anchor points have been deployed in advance; some support cooperative communication and positioning with other devices or terminals; and some can only acquire basic information and have no positioning capability at all.
Meanwhile, in a cooperative positioning scenario, if a device or terminal assists others with positioning, its own position may drift due to capability limitations, or it may be unable to report to the network side in real time, causing cooperative positioning errors to diverge; handling this is also a capability an unmanned device or terminal may need in cooperative positioning. Requiring every unmanned device or terminal participating in a cluster-tactics task to have the same positioning capability is clearly unrealistic: some terminals have high positioning requirements and integrate many sensors, but have poor concealment; others have low positioning requirements, or even no need for a positioning function at all and merely receive position information from the network side, yet have good concealment and can cooperate in reconnaissance. To coordinate devices or terminals with different positioning capabilities well, the network side and the unmanned devices or terminals must interact, and the network side decides, through a cooperation mechanism, whether to deploy a single device to complete a reconnaissance task independently or deploy multiple devices to complete it jointly. The same problem exists in interactions between different devices and is not described again here.
The invention discloses a positioning method based on multi-source fusion. Its core idea is that a device or terminal integrating multiple positioning, navigation, and timing (PNT) sensors reports its multi-source-fusion positioning capability to the network side, or exchanges that capability directly with other devices or terminals integrating multiple PNT sensors. The multi-source-fusion positioning capability comprises three aspects: multi-task role capability, autonomous positioning capability, and cooperative positioning capability. With these three capabilities in place, the network side can configure nodes according to their differing capabilities and, in a complex scenario, centrally plan and deploy one or more PNT-sensor-integrated devices or terminals in the network to complete a reconnaissance task cooperatively; alternatively, multiple such devices or terminals can complete the reconnaissance task cooperatively in a distributed manner. The method suits unmanned clustering, man-machine cooperation, and similar fields in complex scenarios, and has broad application prospects in military and civil-military-integrated unmanned cluster applications, including arbitrary combinations and cooperative positioning among unmanned aerial vehicles, unmanned ships, robots, and individual soldiers.
Disclosure of Invention
The invention discloses a positioning method based on multi-source fusion, which is characterized by comprising the following steps:
A device or terminal integrating multiple positioning, navigation, and timing (PNT) sensors reports its multi-source-fusion positioning capability to the network side, or directly performs multi-source-fusion positioning capability interaction with other devices or terminals integrating multiple PNT sensors.
The multi-source-fusion positioning capability comprises three aspects: multi-task role capability, autonomous positioning capability, and cooperative positioning capability.
After the network side acquires the multi-source-fusion positioning capability of the devices or terminals integrating multiple PNT sensors, it can centrally plan and deploy one or more such devices or terminals in the network to complete a reconnaissance task cooperatively;
or, after the capability interaction, the device or terminal integrating multiple PNT sensors and other such devices or terminals can cooperate with one another in a distributed manner to complete the reconnaissance task.
A positioning method based on multi-source fusion is characterized in that:
The multi-task role capability comprises an indication of whether task role switching is supported and an indication of whether parallel multi-task roles are supported; task roles include, but are not limited to, a scout node role and an anchor node role.
The task-role-switching indication has two states. If task role switching is supported, the device or terminal integrating multiple PNT sensors can act as both a scout node and an anchor node, executing the two role states in a time-division manner; role-state switching can be completed through control signaling issued by the network side. If task role switching is not supported, the device or terminal can act only as one of the two, either a scout node or an anchor node.
The parallel multi-task role indication likewise has two states. If parallel multi-task roles are supported, the device or terminal integrating multiple PNT sensors can take the scout node role and the anchor node role simultaneously; if not, the device or terminal can act as a scout node or an anchor node only on a time-division basis.
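As a concrete illustration, the role behavior implied by the two indications above could be modeled as follows. This is a minimal sketch: the role names and the even/odd time-slot scheme are assumptions for illustration, since the claims do not define a concrete scheduling format.

```python
# Sketch of the multi-task role capability (MRCap) behavior described above.
# A parallel-capable node holds both roles at once; a switching-only node
# alternates roles per time slot; otherwise the node keeps one fixed role.
def roles_in_slot(supports_switching: bool, supports_parallel: bool,
                  slot: int, fixed_role: str = "scout") -> set:
    if supports_parallel:
        return {"scout", "anchor"}                         # both roles simultaneously
    if supports_switching:
        return {"scout"} if slot % 2 == 0 else {"anchor"}  # time-division switching
    return {fixed_role}                                    # single fixed role
```

A switching-only node thus occupies exactly one role in any given slot, matching the time-division behavior stated above.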
A positioning method based on multi-source fusion is characterized in that:
If the device or terminal integrating multiple PNT sensors supports parallel multi-task roles, it must satisfy, in hardware, either of the following two conditions, or both.
Condition 1: the device or terminal integrating multiple PNT sensors must include a vision sensor, a lidar sensor, or both, with simultaneous localization and mapping (SLAM) functionality, so that the anchor role can be supported; meanwhile, the integrated PNT sensors must also include at least one sensor or module for short-range cooperative communication and positioning, including but not limited to ultra-wideband (UWB), Bluetooth, WiFi, NB-IoT, LoRa, and 5G, so that the scout node role can be supported.
Condition 2: the device or terminal integrating multiple PNT sensors must include at least two sensors or modules for short-range cooperative communication and positioning, including but not limited to UWB, Bluetooth, WiFi, NB-IoT, LoRa, and 5G, for example two UWB modules, two Bluetooth modules, two WiFi modules, or one UWB module and one Bluetooth module; the two short-range cooperative communication and positioning sensors or modules must be bound to the same authenticated ID, so that the device or terminal can support the anchor role and the scout role simultaneously.
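The two hardware conditions can be summarized as a simple predicate. This is a hypothetical sketch: the sensor labels are illustrative strings, not a schema defined by the claims.

```python
# Checks the two hardware conditions for supporting parallel multi-task roles:
# condition 1: a SLAM-capable sensor (vision and/or lidar) plus at least one
#              short-range cooperative communication/positioning module;
# condition 2: at least two short-range cooperative modules (bound to one ID).
SLAM_SENSORS = {"vision", "lidar"}
COOP_MODULES = {"uwb", "bluetooth", "wifi", "nb-iot", "lora", "5g"}

def supports_parallel_roles(inventory: list) -> bool:
    coop_count = sum(1 for item in inventory if item in COOP_MODULES)
    has_slam = any(item in SLAM_SENSORS for item in inventory)
    condition1 = has_slam and coop_count >= 1
    condition2 = coop_count >= 2
    return condition1 or condition2
```

For example, a node with a lidar plus one UWB module satisfies condition 1, while a node with two cooperative modules and no SLAM sensor satisfies condition 2.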
A positioning method based on multi-source fusion is characterized in that:
The autonomous positioning capability comprises three types: fully autonomous positioning capability, ordinary positioning capability, and no positioning capability.
Fully autonomous positioning capability means that, in any complex scenario (including GNSS-denied scenarios), the device or terminal integrating multiple PNT sensors can resolve its own absolute position information (longitude, latitude, and height in a geodetic coordinate system) or relative position information (XYZ in a relative coordinate system) through its integrated PNT sensors alone, without relying on other network nodes or network-side assistance information besides satellite GNSS.
Ordinary positioning capability means that, in any complex scenario (including GNSS-denied scenarios), the device or terminal integrating multiple PNT sensors can resolve its own absolute or relative position information only by relying on other network nodes or network-side assistance information besides satellite GNSS.
No positioning capability means that the device or terminal integrating multiple PNT sensors cannot resolve its own absolute or relative position information in any complex scenario (including GNSS-denied scenarios); its position can only be resolved and issued by the network side or by other devices or terminals integrating multiple PNT sensors.
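The three-way classification could be captured as an enumeration. This is a sketch; the 0/1/2 decimal values follow the encoding that embodiment 1 uses later, and the member names are illustrative.

```python
from enum import IntEnum

class SelfPositioningCapability(IntEnum):
    """Autonomous positioning capability (SPCap) levels."""
    NONE = 0              # cannot resolve its own position; network/peers must issue it
    ORDINARY = 1          # needs other network nodes or network-side assistance beyond GNSS
    FULLY_AUTONOMOUS = 2  # resolves absolute/relative position with onboard sensors alone
```
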
A positioning method based on multi-source fusion is characterized in that:
If a device or terminal integrating multiple PNT sensors has fully autonomous positioning capability, its hardware must include a GNSS module, a micro inertial navigation module, and a vision sensor and/or a lidar sensor, together with simultaneous localization and mapping (SLAM) functionality.
If a device or terminal integrating multiple PNT sensors has ordinary positioning capability, its hardware must include a GNSS module and a micro inertial navigation module, and may optionally include at least one short-range cooperative communication and positioning sensor or module, including but not limited to ultra-wideband (UWB), Bluetooth, WiFi, NB-IoT, LoRa, and 5G.
If a device or terminal integrating multiple PNT sensors has no positioning capability, its hardware need not include a GNSS module, a micro inertial navigation module, a vision sensor, or a lidar sensor, but must include at least one sensor or module for short-range cooperative communication and positioning, including but not limited to ultra-wideband (UWB), Bluetooth, WiFi, NB-IoT, LoRa, and 5G; this type minimizes the cost and power consumption of the device or terminal.
A positioning method based on multi-source fusion is characterized in that:
The cooperative positioning capability is an indication of whether the device or terminal integrating multiple PNT sensors supports cooperative positioning, and has two states. If cooperative positioning is supported, the device or terminal can perform cooperative communication with other devices or terminals integrating multiple PNT sensors and thereby realize cooperative positioning; if not, the device or terminal supports neither cooperative communication nor cooperative positioning.
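Putting the three aspects together, the capability report could be modeled as one structure. The field names and layout are hypothetical, though the 0/1 and 0/1/2 encodings match those used in embodiment 1 below.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MSFPCap:
    """Multi-source fusion positioning capability report (illustrative layout)."""
    mr_switching: int  # 1 bit: task role switching supported (1) or not (0)
    mr_parallel: int   # 1 bit: parallel multi-task roles supported (1) or not (0)
    sp: int            # SPCap: 0 = none, 1 = ordinary, 2 = fully autonomous
    cp: int            # 1 bit: cooperative positioning supported (1) or not (0)

    def __post_init__(self) -> None:
        # Reject out-of-range field values early.
        assert self.mr_switching in (0, 1) and self.mr_parallel in (0, 1)
        assert self.sp in (0, 1, 2) and self.cp in (0, 1)
```
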
A positioning method based on multi-source fusion is characterized in that:
The scout node role means that, in a complex scenario, the scout node can reach the destination and complete the reconnaissance task through autonomous or cooperative positioning, and at this stage need not provide any positioning and navigation assistance information to other network devices, terminals, or nodes;
the anchor node role means that, in a complex scenario, the anchor node assists scout nodes in the network with positioning and navigation by sending them its own position coordinates, ranging information, or other necessary assistance information.
A positioning method based on multi-source fusion is characterized in that:
After the network side acquires the multi-source-fusion positioning capability of the device or terminal integrating multiple PNT sensors, the network side sends task-execution information to that device or terminal, including but not limited to destination position coordinates, role information, task stage, and task type.
According to that capability, the network side can plan for the device or terminal integrating multiple PNT sensors to complete a reconnaissance task independently, or arrange for other such devices or terminals to cooperate with it to complete the task, sending the related task information (including but not limited to destination position coordinates, role information, task stage, and task type) to those other devices or terminals as well.
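The task-information message could be sketched as a plain dictionary. The field names here are assumptions for illustration; the text above lists the message contents but not a concrete format.

```python
# Builds the task information the network side issues after capability reporting:
# destination position coordinates, role information, task stage, and task type,
# plus optional extras such as map information or a cooperative terminal ID.
def build_task_info(destination: tuple, role: str, stage: int,
                    task_type: str, extra: dict = None) -> dict:
    message = {
        "destination": destination,  # e.g. (longitude, latitude, height)
        "role": role,                # e.g. "scout" or "anchor"
        "stage": stage,              # task stage
        "type": task_type,           # e.g. "single_recon" or "coop_recon"
    }
    if extra:
        message.update(extra)
    return message
```

A cooperative deployment would issue two such messages, each carrying the other party's ID in the optional extras.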
The positioning method based on multi-source fusion provided by the invention explains in detail how a device or terminal integrating multiple PNT sensors exchanges positioning capability with the network side or with other such devices or terminals. The multi-source-fusion positioning capability comprises three aspects: multi-task role capability, autonomous positioning capability, and cooperative positioning capability. The multi-task role capability comprises two indications, the autonomous positioning capability comprises three types, and the cooperative positioning capability likewise comprises two indications, covering the various cooperative positioning combinations that may arise in unmanned cluster applications. With these three capabilities, the network side can configure nodes according to their differing capabilities and, in a complex scenario, centrally plan and deploy one or more PNT-sensor-integrated devices or terminals in the network to complete a reconnaissance task cooperatively; alternatively, multiple such devices or terminals can complete the reconnaissance task cooperatively in a distributed manner. The method suits complex scenarios, especially unmanned clustering and man-machine cooperation in GNSS-denied environments, and has broad application prospects in military and civil-military-integrated unmanned cluster applications, including arbitrary combinations and cooperative positioning among unmanned aerial vehicles, unmanned ships, robots, and individual soldiers.
Drawings
Fig. 1 is a general technical framework diagram of the present invention.
Fig. 2 is a signaling flow diagram of embodiment 1.
Fig. 3 is a hardware composition diagram of the unmanned device or terminal of embodiment 1.
Fig. 4 is a signaling flow diagram of embodiment 2.
Fig. 5 is a hardware composition diagram of the unmanned device or terminal of embodiment 2.
Fig. 6 is a signaling flow diagram of embodiment 3.
Fig. 7 is a hardware composition diagram of the unmanned device or terminal of embodiment 3.
DETAILED DESCRIPTION OF EMBODIMENT(S) OF THE INVENTION
Example 1:
The signaling flow of example 1 is as follows (as shown in fig. 2):
Step 201: the multi-PNT unmanned device or terminal reports its multi-source-fusion positioning capability to the network side.
The multi-source fusion positioning capability is represented by a structure MSFPCap (Multi-Source Fusion Positioning Capability); the multi-task role capability by a field MRCap (Multi-Role Capability); the autonomous positioning capability by a field SPCap (Self-Positioning Capability); and the cooperative positioning capability by a field CPCap (Cooperative Positioning Capability).
The two indications of the multi-task role capability MRCap (whether task role switching is supported, and whether parallel multi-task roles are supported) may each be represented by 1 bit, where 0 means unsupported and 1 means supported. In embodiment 1, the multi-PNT unmanned device or terminal supports both task role switching and parallel multi-task roles, so the transmitted MRCap value is (1,1).
The autonomous positioning capability SPCap has three types (fully autonomous positioning capability, ordinary positioning capability, and no positioning capability) and can be represented in binary, decimal, or other forms. This embodiment uses decimal as an example: SPCap is 0 for no positioning capability, 1 for ordinary positioning capability, and 2 for fully autonomous positioning capability. Since the multi-PNT unmanned device or terminal in this embodiment has fully autonomous positioning capability, the transmitted SPCap value is 2.
CPCap indicates whether cooperative positioning is supported and can be represented by 1 bit, where 0 means unsupported and 1 means supported. In this embodiment, the multi-PNT unmanned device or terminal does not support cooperative positioning and relies mainly on fully autonomous positioning, so the transmitted CPCap value is 0.
In summary, in this embodiment the multi-source fusion positioning capability MSFPCap actually sent is: MRCap = (1,1), SPCap = 2, CPCap = 0.
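A compact serialization of the embodiment-1 report might look as follows. The five-bit layout is purely an assumption: the text gives the field values (MRCap = (1,1), SPCap = 2, CPCap = 0) but no wire format.

```python
# Packs the MSFPCap fields into one integer: two MRCap bits, two SPCap bits
# (values 0..2), and one CPCap bit. The bit layout is a hypothetical choice.
def pack_msfpcap(mr_switch: int, mr_parallel: int, sp: int, cp: int) -> int:
    assert mr_switch in (0, 1) and mr_parallel in (0, 1) and cp in (0, 1)
    assert sp in (0, 1, 2)
    return (mr_switch << 4) | (mr_parallel << 3) | (sp << 1) | cp

# Embodiment 1 values: MRCap = (1,1), SPCap = 2, CPCap = 0
embodiment1 = pack_msfpcap(1, 1, 2, 0)
```
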
Step 202: the network side receives the positioning capability report from the multi-PNT unmanned device or terminal and performs related task planning. Because this unmanned device or terminal has strong positioning capability but no cooperative positioning capability, a single-machine reconnaissance mode is adopted.
Step 203: the network side issues task information to the multi-PNT unmanned device or terminal, including destination position coordinates, role information, task stage, and task type (in this embodiment, a single-machine reconnaissance indication). The network side may of course also issue other information, such as map information.
In this embodiment, the hardware composition of the multi-PNT unmanned device or terminal is shown in fig. 3. The terminal's PNT sensors mainly comprise a GNSS module, a micro inertial navigation module, and a lidar supporting SLAM; the terminal also comprises a CPU/GPU and a power supply unit. This single-machine reconnaissance application can be used with an unmanned aerial vehicle or an unmanned vehicle (each carrying a multi-PNT unmanned device or terminal) performing reconnaissance in complex scenarios.
Example 2:
The signaling flow of example 2 is as follows (as shown in fig. 4):
Step 401: the first unmanned device or terminal reports its multi-source-fusion positioning capability to the network side.
The first unmanned device or terminal supports task role switching and parallel multi-task roles, has fully autonomous positioning capability, and supports cooperative positioning. Therefore, the multi-source fusion positioning capability MSFPCap actually sent by the first unmanned device or terminal is: MRCap = (1,1), SPCap = 2, CPCap = 1.
Step 402: the second unmanned device or terminal reports its multi-source-fusion positioning capability to the network side.
The second unmanned device or terminal supports task role switching but not parallel multi-task roles, has only ordinary positioning capability, and supports cooperative positioning. Therefore, the MSFPCap actually sent by the second unmanned device or terminal is: MRCap = (1,0), SPCap = 1, CPCap = 1.
Step 403: the network side receives the positioning capability reports of the first and second unmanned devices or terminals and performs related task planning. Because the task environment is complex and the positioning capabilities of the two devices differ, cooperative positioning can compensate for the weaknesses of single-machine reconnaissance; a cooperative reconnaissance mode is therefore adopted.
Step 404: the network side issues task information to the first unmanned device or terminal, including destination position coordinates, role information, task stage, task type (in this embodiment, a cooperative reconnaissance indication), and the cooperative terminal ID (the ID of the second unmanned device or terminal). The network side may of course also issue other information, such as map information.
Step 405: the network side issues task information to the second unmanned device or terminal, including destination position coordinates, role information, task stage, task type (in this embodiment, a cooperative reconnaissance indication), and the cooperative terminal ID (the ID of the first unmanned device or terminal). The network side may of course also issue other information, such as map information.
Step 406: the first unmanned device or terminal detects the ID information of the cooperative terminal (the second unmanned device or terminal).
Step 407: after detection is completed, the first unmanned device or terminal sends cooperative positioning request information to the second unmanned device or terminal.
Step 408: after receiving the cooperative positioning request from the first unmanned device or terminal, the second unmanned device or terminal feeds cooperative positioning response information back to the first unmanned device or terminal.
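Steps 406 to 408 amount to a small detect-then-handshake exchange, which might be sketched as follows. The device IDs and message names are illustrative assumptions; the patent does not define a wire protocol for this exchange.

```python
# Simulates steps 406-408: the first device checks that the configured
# cooperative terminal ID is detectable, then exchanges a cooperative
# positioning request/response with that peer.
def coop_handshake(first_id: str, coop_id: str, detected_ids: set) -> list:
    log = []
    if coop_id in detected_ids:                               # step 406: detect peer ID
        log.append((first_id, coop_id, "COOP_POS_REQUEST"))   # step 407: request
        log.append((coop_id, first_id, "COOP_POS_RESPONSE"))  # step 408: response
    return log
```

If the cooperative terminal ID is never detected, no request is sent and the handshake log stays empty.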
In this embodiment, the hardware composition of the first unmanned device or terminal is shown in fig. 5. The terminal's PNT sensors mainly comprise a GNSS module, a micro inertial navigation module, a binocular camera supporting visual SLAM, and a UWB module supporting cooperative positioning; the terminal also comprises a CPU/GPU and a power supply unit. This cooperative reconnaissance application can be applied to unmanned vehicle and UAV cooperation, UAV swarms, and soldier and UAV cooperation (the two parties carrying the first and second unmanned devices or terminals, respectively) in complex-scenario reconnaissance.
Embodiment 3:
Embodiment 3 is also a typical case. The whole embodiment involves three network elements: a multi-PNT unmanned device or terminal (hereinafter "first unmanned device or terminal" for distinction), another multi-PNT unmanned device or terminal (hereinafter "second unmanned device or terminal" for distinction), and the network side. The first unmanned device or terminal has a certain positioning capability, and the second unmanned device or terminal has a positioning capability substantially equivalent to that of the first; both support cooperative positioning. The difference from embodiment 2 is that only the first unmanned device or terminal has signaling interaction with the network side. Based on the multi-source fusion positioning capability of the first unmanned device or terminal, the network side judges that a reconnaissance task needs to be executed cooperatively and sends the related task information to the first unmanned device or terminal. When executing the task, the first unmanned device or terminal must itself send Poll information into the wireless space and wait for a Response from a second unmanned device or terminal in the network. If a Response from the second unmanned device or terminal is received, the first and second unmanned devices or terminals may optionally exchange the cooperative positioning request/response and their multi-source fusion positioning capabilities.
The signaling flow of embodiment 3 is as follows (as shown in fig. 6):
Step 601: the first unmanned device or terminal reports its multi-source fusion based positioning capability to the network side.
The first unmanned device or terminal supports task role switching and multitask role parallelism; it has only ordinary positioning capability and does not support SLAM-based mapping; and it supports cooperative positioning. Given this combination of capabilities, the first unmanned device or terminal integrates neither a lidar nor a binocular camera, and must therefore include at least two sensors or modules for close-range cooperative communication and positioning. In this embodiment, two UWB modules are adopted, supporting the anchor role and the scout-node role respectively. The multi-source fusion positioning capability MSFPCap actually sent by the first unmanned device or terminal is therefore:
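The concrete encoding of MSFPCap is not reproduced at this point in the text. As a hypothetical illustration only (the field names below are assumptions, not the patent's format), the capability combination just described might be represented as:

```python
# Hypothetical sketch of an MSFPCap report covering the three capability
# dimensions named in the claims (multitask role, autonomous positioning,
# cooperative positioning). Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MSFPCap:
    role_switching: bool  # supports scout/anchor switching (time-division)
    role_parallel: bool   # supports scout + anchor roles simultaneously
    autonomous: str       # "fully_autonomous" | "ordinary" | "none"
    slam_mapping: bool    # can build maps via SLAM
    co_location: bool     # supports cooperative positioning

# The embodiment-3 first terminal: role switching and parallelism supported,
# ordinary positioning only, no SLAM mapping, cooperative positioning supported.
cap = MSFPCap(role_switching=True, role_parallel=True,
              autonomous="ordinary", slam_mapping=False, co_location=True)
```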
step 602: and the network side receives the first unmanned device or terminal and performs related task planning. Since the environmental conditions for performing the task are more complex, the first unmanned device or terminal cannot perform the reconnaissance task alone. Therefore, a cooperative reconnaissance mode is required.
Step 603: the network side issues the task information to the first unmanned device or terminal, including the destination location coordinate, the role information, the task phase, the task type (in this embodiment, the cooperative reconnaissance instruction), and the map information. The reason why the map information is issued in this embodiment is that the first unmanned device or the terminal cannot establish a map in an unfamiliar scene, and therefore the first unmanned device or the terminal can become the prior or known information only by issuing the map information through the network side.
Step 604: the first unmanned device or terminal works in scout-node mode, sends Poll information into the wireless space through its UWB module, and waits for feedback from other devices or terminals.
Step 605: the second unmanned device or terminal, working in anchor mode, receives the Poll information from the first unmanned device or terminal and feeds back Response information to the first unmanned device or terminal.
Step 606: the first unmanned device or terminal sends the cooperative positioning request and its own multi-source fusion positioning capability to the second unmanned device or terminal.
Step 607: the second unmanned device or terminal sends the cooperative positioning response and its own multi-source fusion positioning capability to the first unmanned device or terminal.
It is worth noting that steps 606 and 607 represent one of two alternative signaling interaction modes. In the alternative mode, the first unmanned device or terminal carries the cooperative positioning request and its own multi-source fusion positioning capability in the Poll information (access request) of step 604, and the second unmanned device or terminal likewise carries the cooperative positioning response and its own multi-source fusion positioning capability in the Response information of step 605. The invention is not limited in this respect; any arrangement in which the two unmanned devices or terminals exchange their multi-source fusion positioning capabilities falls within the scope of protection.
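The two interaction modes above can be sketched as follows. This is an illustrative reading only; the frame field names are assumptions, not the patent's message format:

```python
# Illustrative sketch of the piggyback signaling option: the co-location
# request and MSFPCap ride on the UWB Poll frame, and the response and MSFPCap
# ride on the Response frame. Field names are hypothetical.

def build_poll(sender_id: str, cap: dict, piggyback: bool) -> dict:
    frame = {"type": "POLL", "from": sender_id}
    if piggyback:  # alternative mode: carry the request inside the Poll
        frame["coloc_request"] = True
        frame["msfp_cap"] = cap
    return frame

def build_response(sender_id: str, poll: dict, cap: dict) -> dict:
    frame = {"type": "RESPONSE", "from": sender_id, "to": poll["from"]}
    if poll.get("coloc_request"):  # mirror the piggybacked capability exchange
        frame["coloc_response"] = True
        frame["msfp_cap"] = cap
    return frame

# Scout (first terminal) polls; anchor (second terminal) responds.
poll = build_poll("UAV-1", {"co_location": True}, piggyback=True)
resp = build_response("ANCHOR-2", poll, {"co_location": True})
```

With `piggyback=False`, the same helpers model the dedicated request/response exchange of steps 606 and 607 instead.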
In this embodiment, a hardware composition diagram of the first unmanned device or terminal is shown in fig. 7. The PNT sensors of the terminal mainly comprise a GNSS module, a micro inertial navigation module, and UWB modules supporting cooperative positioning. Because the first unmanned device or terminal supports multitask role parallelism, two UWB modules are adopted: UWB module 1 (performing the anchor role) and UWB module 2 (performing the scout-node role). The purpose of adopting two UWB modules is twofold. First, UWB module 1 performs the anchor role and can provide a coordinate reference to the second unmanned device or terminal, or to other unmanned devices or terminals, to assist their positioning. Second, UWB module 2 performs the scout-node role; to avoid position drift and divergence during motion, the scout node can continuously correct its own position information by relying on other reference nodes, anchors, or other signal sources in the network, thereby obtaining its position more accurately.
In addition, the first unmanned device or terminal comprises a processing chip (MCU) and a power supply unit. This cooperative reconnaissance scenario applies to complex-scene reconnaissance such as unmanned vehicle and unmanned aerial vehicle cooperation, unmanned aerial vehicle swarms, soldier and unmanned aerial vehicle cooperation (carrying the first and second unmanned devices or terminals, respectively), and cooperation with anchors pre-deployed in certain specific areas.
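The drift-correction idea behind the scout-node role can be illustrated with a minimal example. Assuming 2-D geometry, three anchors at known coordinates, and noise-free UWB ranges (all assumptions for illustration; a real system would use more anchors and a filtered solution), the scout's position follows from linearizing the range equations:

```python
# Minimal sketch: re-fix a drifting scout position from UWB ranges to three
# anchors by subtracting pairs of range equations (removing the quadratic
# terms) and solving the resulting 2x2 linear system.

def trilaterate(anchors, ranges):
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    # Linearized system A @ [x, y] = b from pairwise-differenced range equations
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # nonzero when anchors are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Anchors at known coordinates; the scout is truly at (3, 4).
pos = trilaterate([(0, 0), (10, 0), (0, 10)], [5.0, 65**0.5, 45**0.5])
# -> (3.0, 4.0)
```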
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (8)
1. A positioning method based on multi-source fusion, characterized in that:
a device or terminal integrating multiple positioning, navigation and timing (PNT) sensors reports its multi-source fusion based positioning capability to a network side, or directly exchanges multi-source fusion based positioning capabilities with another device or terminal integrating multiple PNT sensors;
the multi-source fusion based positioning capability comprises three aspects: multitask role capability, autonomous positioning capability and cooperative positioning capability;
after the network side acquires the multi-source fusion based positioning capability of the device or terminal integrating multiple PNT sensors, the network side can plan and deploy, in a centralized manner, one or more devices or terminals integrating multiple PNT sensors in the network to cooperatively complete a reconnaissance task;
or, after exchanging multi-source fusion based positioning capabilities, the device or terminal integrating multiple PNT sensors and the other device or terminal integrating multiple PNT sensors can cooperate with each other in a distributed manner to complete the reconnaissance task.
2. The multi-source fusion-based positioning method according to claim 1, wherein:
the multitask role capability comprises an indication of whether task role switching is supported and an indication of whether multitask role parallelism is supported;
the task roles include, but are not limited to, a scout-node role and an anchor-node role;
the indication of whether task role switching is supported comprises two states: if task role switching is supported, the device or terminal integrating multiple PNT sensors can serve as both a scout node and an anchor node, executes the two role states in a time-division manner, and can complete role-state switching through a control signaling issued by the network side; if task role switching is not supported, the device or terminal integrating multiple PNT sensors can serve only as either a scout node or an anchor node;
the indication of whether multitask role parallelism is supported comprises two states: if multitask role parallelism is supported, the device or terminal integrating multiple PNT sensors can play the scout-node role and the anchor-node role simultaneously; if multitask role parallelism is not supported, the device or terminal integrating multiple PNT sensors can serve as a scout node or an anchor node only in a time-division manner.
3. The multi-source fusion-based positioning method according to claims 1 and 2, wherein:
if the device or terminal integrating multiple PNT sensors supports multitask role parallelism, it needs to meet either one or both of the following hardware conditions:
condition one: the device or terminal integrating multiple PNT sensors must include a vision sensor, a lidar sensor, or both, and simultaneously have the simultaneous localization and mapping (SLAM) function, so as to support the anchor role; the integrated PNT sensors must also include at least one sensor or module for close-range cooperative communication and positioning, including but not limited to ultra-wideband (UWB), Bluetooth, WiFi, NB-IoT, LoRa and 5G, so as to support the scout-node role;
condition two: the device or terminal integrating multiple PNT sensors must include at least two sensors or modules for close-range cooperative communication and positioning, including but not limited to ultra-wideband (UWB), Bluetooth, WiFi, NB-IoT, LoRa and 5G, for example two UWB modules, two Bluetooth modules, two WiFi modules, or one UWB module and one Bluetooth module; the two close-range cooperative communication and positioning sensors or modules need to be bound under the same ID authentication, so that the device or terminal integrating multiple PNT sensors can simultaneously support the anchor role and the scout-node role.
4. The multi-source fusion-based positioning method according to claim 1, wherein:
the autonomous positioning capability comprises three types: fully autonomous positioning capability, ordinary positioning capability and no positioning capability;
the fully autonomous positioning capability means that, in any complex scene (including a GNSS-denied scene), the device or terminal integrating multiple PNT sensors can resolve its own absolute position information (longitude, latitude and height in a geodetic coordinate system) or relative position information (XYZ in a relative coordinate system) through its integrated PNT sensors, without relying on other network nodes or on network-side auxiliary information beyond satellite GNSS;
the ordinary positioning capability means that, in any complex scene (including a GNSS-denied scene), the device or terminal integrating multiple PNT sensors can resolve its own absolute position information (longitude, latitude and height in a geodetic coordinate system) or relative position information (XYZ in a relative coordinate system) only by relying on other network nodes or network-side auxiliary information beyond satellite GNSS;
no positioning capability means that, in any complex scene (including a GNSS-denied scene), the device or terminal integrating multiple PNT sensors cannot resolve its own absolute position information (longitude, latitude and height in a geodetic coordinate system) or relative position information (XYZ in a relative coordinate system); its position can only be resolved and issued by the network side or by other devices or terminals integrating multiple PNT sensors.
5. The multi-source fusion-based positioning method according to claims 1 and 4, wherein:
if the device or terminal integrating multiple PNT sensors has the fully autonomous positioning capability, it must, in hardware, include a GNSS module, a micro inertial navigation module, and a vision sensor, a lidar sensor or both, and simultaneously have the simultaneous localization and mapping (SLAM) function;
if the device or terminal integrating multiple PNT sensors has the ordinary positioning capability, it must, in hardware, include a GNSS module and a micro inertial navigation module, and may optionally include at least one sensor or module for close-range cooperative communication and positioning, including but not limited to ultra-wideband (UWB), Bluetooth, WiFi, NB-IoT, LoRa and 5G;
if the device or terminal integrating multiple PNT sensors has no positioning capability, it need not include, in hardware, a GNSS module, a micro inertial navigation module, a vision sensor or a lidar sensor, but must include at least one sensor or module for close-range cooperative communication and positioning, including but not limited to ultra-wideband (UWB), Bluetooth, WiFi, NB-IoT, LoRa and 5G; this type minimizes the cost and power consumption of the device or terminal.
6. The multi-source fusion-based positioning method according to claim 1, wherein:
the cooperative positioning capability is an indication of whether the device or terminal integrating multiple PNT sensors supports cooperative positioning, and comprises two states: if cooperative positioning is supported, the device or terminal integrating multiple PNT sensors can perform cooperative communication with other devices or terminals integrating multiple PNT sensors to realize the cooperative positioning function; if cooperative positioning is not supported, the device or terminal integrating multiple PNT sensors cannot perform cooperative communication or cooperative positioning with other devices or terminals integrating multiple PNT sensors.
7. The multi-source fusion-based positioning method according to claims 1 and 2, wherein:
the scout-node role means that, in a complex scene, the scout node can reach a destination and complete a reconnaissance task in an autonomous positioning or cooperative positioning manner, and at this stage does not need to provide any positioning or navigation auxiliary information to other network devices, terminals or nodes;
the anchor-node role means that, in a complex scene, the anchor node assists the scout nodes in completing positioning and navigation functions by sending its own position coordinates, ranging information or other necessary auxiliary information to the scout nodes in the network.
8. The multi-source fusion-based positioning method according to claim 1, wherein:
after the network side acquires the multi-source fusion based positioning capability of the device or terminal integrating multiple PNT sensors, the network side sends task execution information to the device or terminal integrating multiple PNT sensors, the information including but not limited to destination position coordinates, role information, task phase and task type;
according to the multi-source fusion based positioning capability of the device or terminal integrating multiple PNT sensors, the network side can arrange and plan for that device or terminal to complete a reconnaissance task independently; or it can arrange for another device or terminal integrating multiple PNT sensors to cooperate with it to complete the reconnaissance task, and send the related task information to that other device or terminal, the information including but not limited to destination position coordinates, role information, task phase and task type.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111657856.2A CN115209526A (en) | 2021-12-31 | 2021-12-31 | Positioning method based on multi-source fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115209526A true CN115209526A (en) | 2022-10-18 |
Family
ID=83574044
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102215450A (en) * | 2010-04-02 | 2011-10-12 | 中兴通讯股份有限公司 | Positioning capability information transmission method and transmission system |
CN107534843A (en) * | 2015-05-11 | 2018-01-02 | 高通股份有限公司 | For the base station selected of positioning/addressing for being indicated based on capacity |
CN109714421A (en) * | 2018-12-28 | 2019-05-03 | 国汽(北京)智能网联汽车研究院有限公司 | Intelligent network based on bus or train route collaboration joins automobilism system |
CN111405656A (en) * | 2019-01-02 | 2020-07-10 | 北京金坤科创技术有限公司 | Positioning method and system based on ultra-wideband technology |
CN112929838A (en) * | 2021-01-27 | 2021-06-08 | 中国人民解放军军事科学院国防科技创新研究院 | Testing method and device for communication and high-precision three-dimensional positioning integrated module |
CN113495287A (en) * | 2020-04-03 | 2021-10-12 | 北京金坤科创技术有限公司 | High-precision positioning method for swarm unmanned aerial vehicle |
Non-Patent Citations (1)
Title |
---|
ZHU Feng: "Precise positioning and attitude determination methods and key technologies based on GNSS/SINS/vision multi-sensor fusion", China Doctoral Dissertations Full-text Database, 15 August 2020 (2020-08-15) *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||