CN117826141A - Collaborative positioning method for distributed unmanned aerial vehicle group in complex environment - Google Patents

Collaborative positioning method for distributed unmanned aerial vehicle group in complex environment

Info

Publication number
CN117826141A
Authority
CN
China
Prior art keywords
unmanned aerial vehicle
key frame
distributed
Prior art date
Legal status: Pending
Application number
CN202311873606.1A
Other languages
Chinese (zh)
Inventor
钟毅
李郑嘉
鲁仁全
刘畅
徐雍
杨立鑫
Current Assignee
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date
Filing date: 2023-12-29
Publication date: 2024-04-05
Application filed by Guangdong University of Technology


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of unmanned aerial vehicles, and in particular to a collaborative positioning method for a distributed unmanned aerial vehicle group in a complex environment. According to the invention, the fisheye cameras and the UWB modules free the unmanned aerial vehicle group from field-of-view limitations, so that relative states can still be estimated accurately even without abundant common environmental features. An outlier rejection module effectively removes outliers from the UWB measurements, making the ranging more accurate and preventing degraded collaborative positioning accuracy or outright estimation errors. A mechanism that computes the redundancy value of each key frame limits the computing resources used by each unmanned aerial vehicle while effectively retaining earlier key frames. An active loop detection method ensures that a single unmanned aerial vehicle maintains good positioning in a complex environment and provides a good initial pose value for relative state estimation after the unmanned aerial vehicles meet, achieving a coarse-to-fine positioning effect and effectively guaranteeing the global consistency of the multiple unmanned aerial vehicles.

Description

Collaborative positioning method for distributed unmanned aerial vehicle group in complex environment
Technical Field
The invention relates to the technical field of unmanned aerial vehicles, and in particular to a collaborative positioning method for a distributed unmanned aerial vehicle group in a complex environment.
Background
In recent years, research on multi-unmanned-aerial-vehicle systems has received increasing attention. Compared with a single unmanned aerial vehicle, a multi-UAV system can complete tasks better: in autonomous exploration or search-and-rescue, for example, it can explore an area faster and more thoroughly, share individual information with the other members, and coordinate the tasks and plans of each vehicle.
At present, the positioning of multiple unmanned aerial vehicles mostly depends on external devices for state estimation, such as anchor-based UWB (Ultra-Wide Band), motion capture systems and GPS. However, these centralized systems require deploying a large amount of external equipment to estimate the state of the swarm, which becomes far more difficult in complex environments such as open fields or the outdoors, making them unsuitable for real exploration tasks. The problem of collaborative positioning for distributed multi-UAV systems without deploying external equipment therefore needs to be solved.
In addition, the methods commonly used for multi-UAV relative state estimation have their own problems: (1) Visual-inertial odometry (VIO) on each unmanned aerial vehicle removes the need for external devices, but its state estimation drifts in scenes with little texture or few features. (2) UWB modules can provide relative distances between unmanned aerial vehicles, but they suffer from non-line-of-sight conditions and insufficient long-range communication, which reduce accuracy. (3) Insufficient observability means, on the one hand, that it is hard for unmanned aerial vehicles to recognize the same landmarks across vehicles and, on the other hand, that it is hard for an unmanned aerial vehicle to detect its neighbors.
In summary, visual autonomous positioning can provide high-precision position estimation without deploying any external facilities, but it has strict requirements on illumination and texture, its state estimation may drift, and different vehicles use different pose-estimation reference frames when it is applied to multiple unmanned aerial vehicles. UWB positioning has no estimation drift and can provide a global reference frame, but its accuracy is lower than that of visual methods and it is susceptible to electromagnetic interference. Research on distributed collaborative positioning of multiple unmanned aerial vehicles in complex environments is therefore necessary to improve the positioning accuracy and stability of multiple unmanned aerial vehicles in such environments.
Disclosure of Invention
The invention provides a collaborative positioning method for a distributed unmanned aerial vehicle group in a complex environment, aiming to solve the problem of collaborative positioning of distributed multiple unmanned aerial vehicles without deploying external equipment.
The collaborative positioning method comprises the following steps:
S1, installing a fisheye camera and a UWB module on each unmanned aerial vehicle in the unmanned aerial vehicle group;
S2, acquiring a fisheye image through the fisheye camera, and preprocessing the fisheye image to obtain an optimized image;
S3, extracting visual-inertial odometry (VIO) from the optimized image, and estimating the motion of each unmanned aerial vehicle according to the visual-inertial odometry to obtain a measurement model of each unmanned aerial vehicle;
S4, performing mutual loop detection on the optimized images through a distributed loop detection method, estimating the relative poses between the unmanned aerial vehicles through the fisheye cameras, and eliminating the drift error of the visual-inertial odometry;
S5, measuring the relative distances between the unmanned aerial vehicles through the UWB modules;
S6, establishing a redundant key frame deletion mechanism, and deleting the key frame with the highest redundancy value among the key frames of the visual-inertial odometry according to the mechanism; wherein a key frame of the visual-inertial odometry comprises the optimized image and the extrinsic parameters of the fisheye camera;
S7, estimating the state of the unmanned aerial vehicle group according to the measurement model;
S8, performing detection for each unmanned aerial vehicle in the group and judging whether neighboring unmanned aerial vehicles can be observed; if so, the currently detected unmanned aerial vehicle is close to the other unmanned aerial vehicles, and a short-range high-precision distributed collaborative positioning strategy is executed; if not, the current unmanned aerial vehicle cannot be detected successfully or its relative distance to the other unmanned aerial vehicles is large, and it is set as an independent unmanned aerial vehicle that positions itself autonomously through an active loop detection strategy.
Preferably, in step S2, the process of preprocessing the fisheye image includes the following steps:
s21, converting the fisheye image according to a cylindrical projection model to obtain the optimized image;
the cylindrical projection model satisfies the following calculation formula:

$$\rho=\sqrt{X^{2}+Z^{2}},\qquad \phi=\operatorname{atan2}(X,Z),\qquad u=f_{\phi}\,\phi+u_{0},\qquad v=f_{Y}\,\frac{Y}{\rho}+v_{0}$$

wherein ρ represents the radial distance to the cylinder axis, φ = atan2(X, Z) represents the azimuth angle, f_φ and f_Y represent the focal lengths along the X-axis and Y-axis respectively, u_0 and v_0 represent the principal point, u and v represent the coordinates of the projected point on the fisheye image, and X, Y and Z represent the coordinate values along the X-, Y- and Z-axes respectively.
Preferably, the measurement model satisfies the following relation:

$$z_{i}^{t-1,t}=\left(\hat{x}_{i}^{t-1}\right)^{-1}\otimes\hat{x}_{i}^{t}+n_{vio}$$

wherein z_i^{t-1,t} represents the relative pose of unmanned aerial vehicle i from time t-1 to time t, x̂_i^t represents the four-degree-of-freedom pose of unmanned aerial vehicle i at time t, and n_vio represents Gaussian noise.
Preferably, the distributed loop detection method includes:
defining a first unmanned aerial vehicle; when the first unmanned aerial vehicle receives a key frame of the visual-inertial odometry, extracting features of the key frame to obtain feature information; integrating and packaging the feature information with the state information of the first unmanned aerial vehicle to obtain an encapsulated key frame; and broadcasting the encapsulated key frame to the other unmanned aerial vehicles;
defining a second unmanned aerial vehicle, and establishing a local database and a common-view database, wherein the local database stores key frames generated by the unmanned aerial vehicle itself and the common-view database stores key frames received through communication with neighboring unmanned aerial vehicles; after the second unmanned aerial vehicle receives the encapsulated key frame, judging whether the numbers of the first and second unmanned aerial vehicles are the same; if so, storing the encapsulated key frame into the local database; if not, storing it into the common-view database;
retrieving the key frame with the highest matching degree to the encapsulated key frame from the local database or the common-view database as a returned key frame, and outputting the returned key frame to the first unmanned aerial vehicle, wherein the returned key frame comprises a global descriptor, landmarks, and the velocity and acceleration of the second unmanned aerial vehicle;
performing distributed loop closure detection on the loop between the first and second unmanned aerial vehicles according to the encapsulated key frame and the returned key frame;
and judging whether the loops between the first and second unmanned aerial vehicles are pairwise consistent; if not, rejecting the loop as an outlier.
Preferably, the relative distance measured by the UWB modules between the unmanned aerial vehicles satisfies the following calculation formula:

$$d_{ij}^{t}=\left\|\hat{p}_{j}^{t}-\hat{p}_{i}^{t}\right\|+n_{D}+n_{nlos}$$

wherein d_ij^t represents the distance measurement from unmanned aerial vehicle i to unmanned aerial vehicle j at time t, p̂ represents the translation part of the unmanned aerial vehicle state x̂, n_D represents Gaussian noise, and n_nlos represents the measurement error under non-line-of-sight conditions.
Preferably, each unmanned aerial vehicle further comprises an outlier rejection module, which is used for rejecting measurement errors generated when the UWB module measures the relative distances between the unmanned aerial vehicles.
Preferably, the redundancy value of a key frame of the visual-inertial odometry satisfies the following calculation formula:

$$R(F_{i})=\sum_{j\in\zeta_{i}}\operatorname{cost}\big(ob(M_{j})\big)$$

wherein i represents the sequence number of key frame F_i, ζ_i represents the set of all map points observed by key frame F_i, j represents the sequence number of a map point observed by key frame F_i, ob(M_j) represents the number of times the map point M_j with sequence number j has been observed, and cost(·) is the normalized redundancy of a key frame with respect to a single map point.
Preferably, the active loop detection strategy is:
maintaining the path points of the independent unmanned aerial vehicle in a KD-tree, and scoring the suitability of each path point as an active closed-loop path point to obtain a fitness value;
selecting, according to the fitness values and the KD-tree, the path point with the minimum cost from the current position of the independent unmanned aerial vehicle as the optimal path point;
and comparing the cost of the path containing active loop detection with that of the planned path, so as to correct the accumulated drift error of the independent unmanned aerial vehicle.
Compared with the prior art, the fisheye cameras and UWB modules of the invention free the unmanned aerial vehicle group from field-of-view limitations, so that relative states can be estimated accurately even without abundant common environmental features. The distributed mutual loop detection method removes the traditional centralized dependence on a ground station: each unmanned aerial vehicle independently runs its own state estimation and relative state estimation, and the system does not fail when an individual unmanned aerial vehicle exits. The outlier rejection module effectively removes outliers from the UWB measurements, making the ranging more accurate and preventing reduced collaborative positioning accuracy or outright estimation errors. The key frame redundancy mechanism limits the computing resources used by each unmanned aerial vehicle: by computing the redundancy value of every key frame, the key frame with the highest redundancy can be deleted, effectively retaining earlier key frames while making the map sparser, which leads to less drift and better global consistency. For the accumulated drift that arises when the unmanned aerial vehicle group degrades to single-vehicle VIO over long periods, an active loop detection method is proposed to eliminate the drift error. Through this method a single unmanned aerial vehicle maintains good positioning in a complex environment, a good initial pose value is available for relative state estimation after the unmanned aerial vehicles meet, a coarse-to-fine positioning effect is achieved, and the global consistency of the multiple unmanned aerial vehicles is effectively guaranteed.
Drawings
The present invention will be described in detail with reference to the accompanying drawings. The foregoing and other aspects of the invention will become more apparent and more readily appreciated from the following detailed description taken in conjunction with the accompanying drawings. In the accompanying drawings:
fig. 1 is a flowchart of the collaborative positioning method for a distributed unmanned aerial vehicle group in a complex environment according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of the distributed collaborative positioning strategy of the method according to an embodiment of the present invention;
fig. 3 is a schematic diagram of the framework of the method according to an embodiment of the present invention;
fig. 4 is a schematic diagram of the virtual-to-real transformation of a fisheye image according to an embodiment of the present invention;
fig. 5 is a schematic diagram of the distributed loop detection method according to an embodiment of the present invention;
fig. 6 is a schematic diagram of loop outlier rejection in distributed loop detection according to an embodiment of the present invention;
fig. 7 is a schematic diagram of UWB module ranging according to an embodiment of the present invention;
fig. 8 is a schematic flowchart of the redundant key frame deletion mechanism according to an embodiment of the present invention;
fig. 9 is a schematic diagram of the path counts of active closed-loop candidate points according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to figs. 1-9, the present invention provides a collaborative positioning method for a distributed unmanned aerial vehicle group in a complex environment, comprising the following steps:
S1, installing a fisheye camera and a UWB module on each unmanned aerial vehicle in the unmanned aerial vehicle group;
in the embodiment of the invention, each unmanned aerial vehicle of the unmanned aerial vehicle group is provided with an on-board computer, a flight controller, a UWB module, two monocular fisheye lens cameras and a WiFI module for communication. The UWB module is installed as an on-board sensor on each drone, and it does not require deployment of anchor points in the environment, and is only responsible for periodically measuring the relative distance between the drones. Two monocular fish-eye cameras are used as important modules for omni-directional perception, the visual angle of the fish-eye cameras is up to 220 ℃, two fish-eye cameras are deployed, one device is arranged on the top of the unmanned aerial vehicle, and the other device is arranged on the bottom of the unmanned aerial vehicle, so that surrounding environment can be observed in an omni-directional mode, and the framework of the distributed co-location system is shown in fig. 3. At the same time, timestamp synchronization is a problem that needs attention in the robot population. And the UWB module completes mutual time stamp synchronization in the ranging and communication process, and sends the synchronous time stamp to the on-board computer through the serial port. Thus, UWB timestamps are selected as a time reference to obtain timestamps between multiple drones.
S2, acquiring a fisheye image through the fisheye camera, and preprocessing the fisheye image to obtain an optimized image;
In the embodiment of the invention, the process of preprocessing the fisheye image comprises the following steps:
S21, converting the fisheye image according to a cylindrical projection model to obtain the optimized image;
the cylindrical projection model satisfies the following calculation formula:

$$\rho=\sqrt{X^{2}+Z^{2}},\qquad \phi=\operatorname{atan2}(X,Z),\qquad u=f_{\phi}\,\phi+u_{0},\qquad v=f_{Y}\,\frac{Y}{\rho}+v_{0}$$

wherein ρ represents the radial distance to the cylinder axis, φ = atan2(X, Z) represents the azimuth angle, f_φ and f_Y represent the focal lengths along the X-axis and Y-axis respectively, u_0 and v_0 represent the principal point, u and v represent the coordinates of the projected point on the fisheye image, and X, Y and Z represent the coordinate values along the X-, Y- and Z-axes respectively.
As shown in fig. 4, assuming that the intrinsic matrix of the virtual pinhole camera equals that of the cylindrical projection, the transformation between the real coordinates [X, Y, Z] and the virtual coordinates, together with the corresponding inverse transformation, can be obtained. The 3D size of an object and its orientation are deduced from its appearance, which in a cylindrical projection is similar to its appearance in a perspective image. The azimuth and elevation angles of the virtual object are related to the real azimuth and elevation angles, and these relations can be used, together with the corresponding rotations, to switch between real and virtual yaw and pitch.
After cylindrical projection and virtual-to-real transformation, objects in the fisheye image are much closer to those in a normal perspective image, so the preprocessed fisheye image can be fed to 3D vision neural networks trained on normal image datasets with similar results.
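As an illustration of this preprocessing step, the following minimal sketch remaps a fisheye image onto the cylindrical model above. It assumes an equidistant fisheye model (r = f·θ) and illustrative intrinsics; the function name and parameters (f_fish, cx, cy, out_w, out_h) are hypothetical, not taken from the patent.

```python
# Minimal sketch: reproject a fisheye image onto the cylindrical model above.
# Assumes an equidistant fisheye model r = f * theta; all intrinsics are
# illustrative assumptions, not values from the patent.
import numpy as np
import cv2

def cylindrical_remap(fish_img, f_fish, cx, cy, f_phi, f_y, u0, v0, out_w, out_h):
    # For every output pixel (u, v), recover the ray on the unit-radius cylinder.
    u, v = np.meshgrid(np.arange(out_w, dtype=np.float32),
                       np.arange(out_h, dtype=np.float32))
    phi = (u - u0) / f_phi                 # azimuth angle around the cylinder axis
    Y = (v - v0) / f_y                     # height on the cylinder (rho = 1)
    X, Z = np.sin(phi), np.cos(phi)
    # Angle between the ray and the optical axis, then fisheye radius r = f * theta.
    theta = np.arccos(np.clip(Z / np.sqrt(X**2 + Y**2 + Z**2), -1.0, 1.0))
    r = f_fish * theta
    norm = np.sqrt(X**2 + Y**2) + 1e-9
    map_x = (cx + r * X / norm).astype(np.float32)
    map_y = (cy + r * Y / norm).astype(np.float32)
    return cv2.remap(fish_img, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```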
S3, extracting visual-inertial odometry (VIO) from the optimized image, and estimating the motion of each unmanned aerial vehicle according to the visual-inertial odometry to obtain a measurement model of each unmanned aerial vehicle;
In the embodiment of the invention, each unmanned aerial vehicle of the group carries a fisheye camera and an inertial measurement unit (IMU) to estimate its own motion. VIO running on the preprocessed fisheye images provides real-time local pose and velocity estimates. Because VIO drifts over long periods, the raw odometry is not fused directly; instead, the four-degree-of-freedom relative pose extracted from the VIO is fused at the back end. The measurement model satisfies the following relation:

$$z_{i}^{t-1,t}=\left(\hat{x}_{i}^{t-1}\right)^{-1}\otimes\hat{x}_{i}^{t}+n_{vio}$$

wherein z_i^{t-1,t} represents the relative pose of unmanned aerial vehicle i from time t-1 to time t, x̂_i^t represents the four-degree-of-freedom pose of unmanned aerial vehicle i at time t, and n_vio represents Gaussian noise. A key frame of the VIO comprises the preprocessed image and the extrinsic parameters of the camera; the real-time pose estimate is reused for further processing, avoiding redundant computation.
S4, performing mutual loop detection on the optimized images through a distributed loop detection method, estimating the relative poses between the unmanned aerial vehicles through the fisheye cameras, and eliminating the drift error of the visual-inertial odometry;
In the embodiment of the present invention, the distributed loop detection method includes:
defining a first unmanned aerial vehicle; when the first unmanned aerial vehicle receives a key frame of the visual-inertial odometry, extracting features of the key frame to obtain feature information; integrating and packaging the feature information with the state information of the first unmanned aerial vehicle to obtain an encapsulated key frame; and broadcasting the encapsulated key frame to the other unmanned aerial vehicles;
defining a second unmanned aerial vehicle, and establishing a local database and a common-view database, wherein the local database stores key frames generated by the unmanned aerial vehicle itself and the common-view database stores key frames received through communication with neighboring unmanned aerial vehicles; after the second unmanned aerial vehicle receives the encapsulated key frame, judging whether the numbers of the first and second unmanned aerial vehicles are the same; if so, storing the encapsulated key frame into the local database; if not, storing it into the common-view database;
retrieving the key frame with the highest matching degree to the encapsulated key frame from the local database or the common-view database as a returned key frame, and outputting the returned key frame to the first unmanned aerial vehicle, wherein the returned key frame comprises a global descriptor, landmarks, and the velocity and acceleration of the second unmanned aerial vehicle;
performing distributed loop closure detection on the loop between the first and second unmanned aerial vehicles according to the encapsulated key frame and the returned key frame;
and judging whether the loops between the first and second unmanned aerial vehicles are pairwise consistent; if not, rejecting the loop as an outlier.
Specifically, to address the dependence of centralized multi-unmanned-aerial-vehicle collaborative positioning on a host, the invention designs a distributed loop detection method for the unmanned aerial vehicle group that runs independently on each unmanned aerial vehicle and performs mutual loop detection on the optimized images. Relative positioning estimation is achieved by recognizing, through the fisheye cameras, the places visited by all unmanned aerial vehicles, which also eliminates the drift of the VIO.
To store and retrieve the VIO key frames generated locally by an unmanned aerial vehicle and the key frames received from neighboring unmanned aerial vehicle broadcasts, two databases are built on the basis of a vector similarity retrieval database: a local database, which stores key frames generated by the unmanned aerial vehicle itself, and a common-view database, which stores key frames received through communication with neighboring unmanned aerial vehicles.
As shown in fig. 5, the core of distributed loop detection consists of four sub-modules: key frame encapsulation, database retrieval, pose estimation and outlier rejection:
key frame encapsulation sub-module: when drone i (i.e., the first drone) receives the VIO key frame, a lightweight deep learning model mobilenethold is used to extract global features, and a deep learning model SuperPoint for image feature extraction is used to extract landmarks and corresponding descriptors. Correspondence between landmarks from the forward and backward fisheye cameras is established by performing feature matching, and the matched landmarks are triangulated to estimate their 3D position in the partial frame. Global descriptors and landmarks, and speed, acceleration and other information of the unmanned aerial vehicle, are packaged in key framesBroadcast to the entire drone swarm. Since a part of the features cannot establish a correspondence, only feature points having successfully estimated 3D positions are broadcasted to reduce traffic.
Database retrieval sub-module: when unmanned aerial vehicle j (i.e., the second unmanned aerial vehicle) receives a key frame, it judges the source of the key frame. If i = j, the key frame was generated locally: it is searched against the common-view database and then added to the local database. Otherwise (i ≠ j), the key frame was broadcast by a neighboring unmanned aerial vehicle: it is searched against the local database and then added to the common-view database. The unmanned aerial vehicle retrieves the most similar key frame in the database, then packages the global descriptor, landmarks, and the velocity and acceleration information of the matching frame with the highest similarity score into a key frame returned to unmanned aerial vehicle i.
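The routing between the two databases might look like the following sketch, in which a Faiss-style inner-product index stands in for the vector similarity retrieval database; the class layout, the fields drone_id and descriptor, and the similarity threshold are assumptions for illustration.

```python
# Minimal sketch of local / common-view keyframe databases built on a
# vector-similarity index. Descriptors are assumed to be L2-normalized float32.
import faiss

class KeyframeDatabases:
    def __init__(self, self_id, dim=1024):
        self.self_id = self_id
        self.local = faiss.IndexFlatIP(dim)    # keyframes generated by this drone
        self.common = faiss.IndexFlatIP(dim)   # keyframes broadcast by neighbours
        self.local_meta, self.common_meta = [], []

    def on_keyframe(self, kf):
        # Query the *opposite* database, then store the keyframe in its own one.
        if kf.drone_id == self.self_id:
            query, meta = self.common, self.common_meta
            store, store_meta = self.local, self.local_meta
        else:
            query, meta = self.local, self.local_meta
            store, store_meta = self.common, self.common_meta
        best = None
        if query.ntotal > 0:
            score, idx = query.search(kf.descriptor.reshape(1, -1), 1)
            if score[0, 0] > 0.8:              # similarity threshold (assumed)
                best = meta[int(idx[0, 0])]
        store.add(kf.descriptor.reshape(1, -1))
        store_meta.append(kf)
        return best                            # most similar keyframe, if any
```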
Pose estimation sub-module: when a returned closed-loop key frame is received, brute-force matching establishes 2D-3D correspondences between the two key frames, and the relative transform between them is then estimated using RANSAC PnP (Perspective-n-Point with Random Sample Consensus). If enough inliers are found in RANSAC and the geometric test passes, the distributed loop closure between the two unmanned aerial vehicles is considered a good loop, modeled as:

$$z_{ij}^{t_{0},t_{1}}=\left(\hat{x}_{i}^{t_{0}}\right)^{-1}\otimes\hat{x}_{j}^{t_{1}}+n_{z}$$

wherein n_z is Gaussian noise and z_ij^{t0,t1} is the relative pose from key frame K_i^{t0} to key frame K_j^{t1}; x̂ denotes the state of an unmanned aerial vehicle in the local frame under body coordinates. If unmanned aerial vehicle i completes the relative pose estimation, the relative pose is returned to unmanned aerial vehicle j.
When i ≠ j, the map-based measurement provides sufficient observability of the relative pose between unmanned aerial vehicle i and unmanned aerial vehicle j; map-based measurements are therefore essential for observability verification and for the initialization of state estimation.
When t0 ≠ t1, the measurement gives the relative pose of an unmanned aerial vehicle visiting the same place at different times, eliminating the accumulated drift error of the VIO.
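The RANSAC PnP step can be sketched with OpenCV's solver as below; the inputs (matched landmark positions pts3d, their observations pts2d, intrinsics K) and the thresholds are illustrative assumptions.

```python
# Minimal sketch: estimate the relative transform between two matched
# keyframes with Perspective-n-Point inside RANSAC, then apply the inlier test.
import numpy as np
import cv2

def relative_pose_pnp(pts3d, pts2d, K, min_inliers=15):
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(pts3d, np.float32), np.asarray(pts2d, np.float32),
        K, None, iterationsCount=100, reprojectionError=3.0)
    if not ok or inliers is None or len(inliers) < min_inliers:
        return None            # geometric test failed: reject this loop candidate
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T                   # relative transform between the two keyframes
```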
Outlier rejection sub-module: to avoid introducing erroneous inter-vehicle loops, the invention provides a sub-module for rejecting loop outliers, as shown in fig. 6. It mainly checks whether the closed loops between paired unmanned aerial vehicles are mutually consistent. A loop and the odometry between its endpoints should agree; when the Mahalanobis distance between a loop and the odometry of the unmanned aerial vehicle exceeds the threshold σ_L, the loop is considered erroneous and rejected.
S5, measuring the relative distance between each unmanned aerial vehicle through the UWB module;
In the embodiment of the invention, since UWB base stations are difficult to erect in an unknown complex environment, the UWB modules are not deployed in the environment but on each unmanned aerial vehicle, where they are responsible for periodically measuring the relative distances between the unmanned aerial vehicles. As shown in fig. 7, the relative distance measured by the UWB modules satisfies the following calculation formula:

$$d_{ij}^{t}=\left\|\hat{p}_{j}^{t}-\hat{p}_{i}^{t}\right\|+n_{D}+n_{nlos}$$

wherein d_ij^t represents the distance measurement from unmanned aerial vehicle i to unmanned aerial vehicle j at time t, p̂ represents the translation part of the unmanned aerial vehicle state x̂, n_D represents Gaussian noise, and n_nlos represents the measurement error under non-line-of-sight conditions.
In the embodiment of the invention, each unmanned aerial vehicle further comprises an outlier rejection module for rejecting measurement errors generated when the UWB module measures the relative distances between the unmanned aerial vehicles.
When there is no line of sight between unmanned aerial vehicles, multipath effects and other non-line-of-sight (NLOS) influences make the distance measurement error deviate from a normal distribution. According to the principles of graph optimization, using such outliers directly in the optimization is equivalent to adding an edge that should not exist to the pose graph, which tends to reduce collaborative positioning accuracy or even cause estimation errors. Therefore, for the measurement error of the UWB module, the invention provides an outlier rejection module.
The outlier rejection module rejects measurement errors as follows:
When the relative elevation angle between two unmanned aerial vehicles is large, UWB produces significant outliers because the fuselage of the unmanned aerial vehicle shields the module. The measurement d_ij^t is therefore marked as an outlier when it satisfies

$$\arcsin\!\left(\frac{\left|\hat{h}_{i}-\hat{h}_{j}\right|}{d_{ij}^{t}}\right)>\tau_{angle}$$

wherein ĥ_i and ĥ_j are the estimated altitudes of unmanned aerial vehicles i and j respectively, and τ_angle is an angle threshold, which may be set to 37° in practice. τ_angle is chosen according to the angle at which the unmanned aerial vehicle body shields the UWB module.
Next, the displacement increments of the two unmanned aerial vehicles are computed from the velocities and accelerations of neighboring unmanned aerial vehicles obtained through inter-vehicle communication:

$$\Delta p_{i}=v_{i}^{t-1}\,\Delta t+\tfrac{1}{2}a_{i}^{t-1}\,\Delta t^{2},\qquad \Delta p_{j}=v_{j}^{t-1}\,\Delta t+\tfrac{1}{2}a_{j}^{t-1}\,\Delta t^{2},\qquad \Delta d_{max}=\left\|\Delta p_{i}\right\|+\left\|\Delta p_{j}\right\|$$

wherein v_i^{t-1} and v_i^t are the instantaneous velocities of unmanned aerial vehicle i at times t-1 and t, a_i^{t-1} is its acceleration at time t-1, and likewise v_j^{t-1}, v_j^t and a_j^{t-1} for unmanned aerial vehicle j; Δp_i and Δp_j are the displacement increments of unmanned aerial vehicles i and j from time t-1 to time t, and Δd_max is the maximum displacement increment the two unmanned aerial vehicles can produce from time t-1 to time t. If the difference between two successive distance measurements is larger than the maximum relative displacement increment the two unmanned aerial vehicles can produce, the current distance measurement is an outlier.
Then, the distance measurement of the UWB module at a given moment is checked for consistency with the relative pose computed by distributed mutual loop detection. When the unmanned aerial vehicle group is in a feature-rich scene, if the error between the UWB distance measurement and the relative pose obtained from mutual loop detection is large, the current UWB distance measurement is considered an outlier. Finally, a residual with a Huber loss evaluates whether the current relative distance measurement is consistent with the relative state estimate. The expected error of a UWB measurement does not exceed 10 cm, and the invention selects this as the threshold τ_D to ensure that correct values are not over-filtered: when τ_D is too large, outliers interfere with the estimate; when τ_D is too small, correct measurements are rejected because of inaccurate initialization.
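Put together, the three checks might look like the sketch below; the kinematic bound follows the formulas assumed above, and the function signature is illustrative.

```python
# Minimal sketch of UWB outlier rejection: fuselage occlusion, kinematic
# displacement bound, and consistency with the loop-based relative position.
import numpy as np

def uwb_is_outlier(d_t, d_prev, h_i, h_j, v_i, v_j, a_i, a_j, dt,
                   rel_pos_loop=None, tau_angle=np.deg2rad(37.0), tau_d=0.1):
    # 1) Large relative elevation angle: the fuselage shields the UWB module.
    if np.arcsin(min(abs(h_i - h_j) / max(d_t, 1e-6), 1.0)) > tau_angle:
        return True
    # 2) The range cannot change faster than both drones can move within dt.
    dp_max = (np.linalg.norm(v_i * dt + 0.5 * a_i * dt**2) +
              np.linalg.norm(v_j * dt + 0.5 * a_j * dt**2))
    if abs(d_t - d_prev) > dp_max:
        return True
    # 3) In feature-rich scenes, compare against the loop-detection estimate.
    if rel_pos_loop is not None and abs(d_t - np.linalg.norm(rel_pos_loop)) > tau_d:
        return True
    return False
```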
S6, a redundant key frame deleting mechanism is established, and key frames with highest redundant values in the key frames of the visual inertial odometer are deleted according to the redundant key frame deleting mechanism; wherein the key frame of the visual odometer comprises the optimized image and external parameters of the fisheye camera;
In the embodiment of the invention, to address the large computation load of back-end optimization on the unmanned aerial vehicle, a redundant key frame deletion mechanism limits the computing resources used by the back end: once the number of swarm key frames in the graph grows beyond m_max, one swarm key frame is deleted according to its degree of redundancy. The intuition is that during operation some landmark points are observed by many key frames, so discarding part of the key frames corresponding to those landmarks does not affect their estimation, whereas landmarks observed by only a few key frames make those key frames important for the map points and for the pose estimate of the unmanned aerial vehicle. With this method, earlier swarm key frames, which have less impact on the estimation, are not immediately discarded but merely become sparser, so they remain available for optimization without slowing down computation through an excessive number of key frames.
As the number of times a landmark point is observed increases, the redundancy of a key frame with respect to that single map point rises rapidly. To better quantify the relationship between key frames and map points, a redundant key frame evaluation function is defined to represent the redundancy of a key frame with respect to a single map point: a nonlinear function of the number of observations x that quantifies the redundancy of different key frames according to how many times a landmark point has been observed. To ease subsequent scale adjustments, the redundancy values are normalized to [0, 1], yielding cost(x), the normalized redundancy of a key frame with respect to a single map point.
The redundancy value of a key frame of the visual-inertial odometry satisfies the following calculation formula:

$$R(F_{i})=\sum_{j\in\zeta_{i}}\operatorname{cost}\big(ob(M_{j})\big)$$

wherein i represents the sequence number of key frame F_i, ζ_i represents the set of all map points observed by key frame F_i, j represents the sequence number of a map point observed by key frame F_i, and ob(M_j) represents the number of times the map point M_j with sequence number j has been observed.
Through the above steps, the redundancy value of each key frame is obtained accurately; sorting the key frames in descending order determines which key frame is deleted first when deletion is required. The flow of redundant key frame deletion is shown in fig. 8.
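A sketch of the deletion mechanism is given below; since the exact nonlinear evaluation function is not reproduced above, a saturating logarithmic curve stands in for cost(x), which is an assumption of this sketch.

```python
# Minimal sketch: score each keyframe by the summed normalized redundancy of
# its observed map points and delete the most redundant one past the limit.
import numpy as np

def cost(x, x_max=20):
    # Assumed normalized, rapidly saturating redundancy in [0, 1].
    return min(np.log1p(x) / np.log1p(x_max), 1.0)

def redundancy(map_point_ids, obs_count):
    # R(F_i) = sum over observed map points of cost(ob(M_j)).
    return sum(cost(obs_count[j]) for j in map_point_ids)

def prune_keyframes(keyframes, obs_count, m_max):
    # keyframes: {kf_id: set of observed map point ids}; obs_count: {mp_id: n}.
    while len(keyframes) > m_max:
        worst = max(keyframes, key=lambda k: redundancy(keyframes[k], obs_count))
        for j in keyframes.pop(worst):
            obs_count[j] -= 1          # the map point loses one observation
```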
S7, estimating the state of the unmanned aerial vehicle group according to the measurement model;
In the embodiment of the invention, the maximum a posteriori (MAP) inference over the factor graph is solved by nonlinear least-squares optimization to estimate the swarm state. For unmanned aerial vehicle k, the full state vector of the swarm state estimation problem is defined as

$$\mathcal{X}_{k}=\left\{\hat{x}_{i}^{t}\;\middle|\;i=1,\dots,n;\;t=t_{0},\dots,t_{m}\right\}$$

wherein each x̂_i^t is a four-degree-of-freedom pose, n is the number of unmanned aerial vehicles in the swarm system, and m is the number of key frames in the graph.
The nonlinear least-squares problem of MAP inference is expressed as:

$$\mathcal{X}_{k}^{*}=\arg\min_{\mathcal{X}_{k}}\left\{\sum_{S}\left\|r_{p}(z,\mathcal{X}_{k})\right\|^{2}+\sum_{U}\rho\!\left(\left\|r_{d}(d,\mathcal{X}_{k})\right\|^{2}\right)+\sum_{L}\rho\!\left(\left\|r_{p}(z,\mathcal{X}_{k})\right\|^{2}\right)\right\}$$

wherein ρ(·) is the Huber norm and ||·|| the Mahalanobis norm; S is the set of all odometry factors, U the set of all distance factors, and L the set of map-based factors; r_d(·, X_k) is the residual of a distance factor, and r_p(·, X_k) is the residual of a relative pose, applying to both ego-motion factors and map-based factors. The ego-motion residuals ensure the local consistency of each unmanned aerial vehicle's state, while the map-based residuals ensure the global consistency and observability of the relative states. Since distance measurements and map-based measurements may produce outliers, the Huber norm ρ(s) is employed to reduce the impact of possible outlier factors.
According to the measurement model, the relative pose residual is defined as

$$r_{p}\!\left(z_{ij}^{t_{0},t_{1}},\mathcal{X}_{k}\right)=z_{ij}^{t_{0},t_{1}}\ominus\left(\left(\hat{x}_{i}^{t_{0}}\right)^{-1}\otimes\hat{x}_{j}^{t_{1}}\right)$$

wherein z denotes a relative pose measurement, covering both ego-motion measurements and measurements from mutual loop detection.
According to the measurement equation of the UWB module, the distance residual is

$$r_{d}\!\left(d_{ij}^{t},\mathcal{X}_{k}\right)=\left\|\hat{p}_{j}^{t}-\hat{p}_{i}^{t}\right\|-d_{ij}^{t}$$
finally, an open source c++ library Ceres solver developed by Google for modeling and solving a nonlinear least squares optimization problem is adopted to solve the optimization problem, wherein a trust domain method using sparse normal Cholesky decomposition is selected as an optimization algorithm.
S8, detecting each unmanned aerial vehicle in the unmanned aerial vehicle group, judging whether adjacent unmanned aerial vehicles can be observed, if so, enabling the currently detected unmanned aerial vehicle to be closer to other unmanned aerial vehicles, and executing a close-range high-precision distributed co-location strategy; if not, the unmanned aerial vehicle detected at present can not be successfully detected or the relative distance between the unmanned aerial vehicle detected at present and other unmanned aerial vehicles is far, and the unmanned aerial vehicle detected at present is set as an independent unmanned aerial vehicle to carry out autonomous positioning through an active loop detection strategy.
In the embodiment of the invention, to avoid collisions, high-precision relative positioning between unmanned aerial vehicles is required when they approach each other. When they move apart or become mutually invisible, precise relative positioning is no longer necessary; what matters instead is accurate positioning of each single unmanned aerial vehicle, so that an accurate initial pose value is available when the unmanned aerial vehicles later meet and approach each other again, realizing a coarse-to-fine change in positioning accuracy.
To solve the above problems, as shown in fig. 1, the invention proposes a distributed collaborative positioning strategy that distinguishes short-range from long-range multi-UAV operation, using the visibility of neighboring unmanned aerial vehicles as the criterion.
The target detection module mainly detects the other unmanned aerial vehicles in the cluster and their relative distances. YOLOv4-tiny, a convolutional-neural-network-based visual target detection method, detects the two-dimensional bounding box of an unmanned aerial vehicle; using the preprocessed fisheye images, a model trained on custom data can detect the unmanned aerial vehicles effectively.
When a neighboring unmanned aerial vehicle is detected successfully, the distance between the vehicles is relatively short and high-precision relative positioning is required, so the short-range high-precision distributed collaborative positioning strategy is executed. If the unmanned aerial vehicle cannot be detected successfully or the relative distance is large, sufficiently precise relative positioning cannot be obtained, so the vehicle degrades to single-vehicle VIO for independent positioning.
According to the actual scenario, an individual of the unmanned aerial vehicle group degrades to single-vehicle VIO autonomous positioning in the following cases:
(1) When communication is limited or interrupted, the unmanned aerial vehicles cannot communicate and each degrades into a single VIO positioning system.
(2) When the distance between unmanned aerial vehicles becomes large, or an unmanned aerial vehicle cannot be observed by its neighbors for a long time, the group can no longer perform accurate relative positioning and degrades into single VIO positioning systems.
In the embodiment of the present invention, the active loop detection strategy is:
maintaining the path points of the independent unmanned aerial vehicle in a KD-tree, and scoring the suitability of each path point as an active closed-loop path point to obtain a fitness value;
selecting, according to the fitness values and the KD-tree, the path point with the minimum cost from the current position of the independent unmanned aerial vehicle as the optimal path point;
and comparing the cost of the path containing active loop detection with that of the planned path, so as to correct the accumulated drift error of the independent unmanned aerial vehicle.
Specifically, when multiple unmanned aerial vehicles perform tasks such as autonomous search, the distances between them grow and high-precision relative state estimation is no longer possible, so each degrades to single-vehicle VIO positioning. Moreover, during exploration an unmanned aerial vehicle rarely passes through the same place twice, so loop detection rarely triggers and accumulated drift appears.
To address the problems that multiple unmanned aerial vehicles degrade to single VIO, that loop detection cannot be performed effectively, and that accumulated drift arises during long-term operation, the invention proposes an active loop detection method to correct the accumulated drift error and guarantee the global consistency of each unmanned aerial vehicle.
The strategy of active loop detection specifically comprises the following steps:
the KD tree is used for maintaining historical path points and scoring each path point, as shown in fig. 8, when the unmanned aerial vehicle is in a multi-path, the path points are easier to form a closed loop, and the paths near the multi-path are more in road signs and rich in characteristic points, so that the unmanned aerial vehicle is more suitable to be used as candidate path points of an active closed loop. The adaptive value function of the active closed loop path point provided by the invention is as follows:
F(p_loop i .x i )=ln(p_loop i .x i +1)*w_d
wherein p_loop i .x i For the path point p_loop i The corresponding number of paths, w_d, is the weight value, F (x i ) Is a fitness value of the active closed loop path point, and represents the fitness of the current path point as the active closed loop path point.
The path point with the minimum cost from the current position is searched in the KD-tree as the active closed-loop candidate point:

$$p\_loop_{best}=\arg\min_{p\_loop_{i}}\big(w\_c\cdot c(p\_current,\;p\_loop_{i})-F(x_{i})\big)$$

wherein c(·,·) is the path cost between two points and w_c is the weight of the path cost. The optimal active closed-loop candidate depends on both the closed-loop fitness value and the path cost between the two points; balancing the two yields the optimal path point as the active closed-loop point.
Meanwhile, to balance active loop detection against exploration efficiency, the cost of the path containing active loop detection is compared with the cost of the optimal path:

$$C\_loop=c(p\_current,\;p\_loop_{best})+c(p\_loop_{best},\;p\_goal),\qquad C\_plan=c(p\_start,\;p\_goal)$$

wherein C_loop is the path cost including active loop detection and C_plan is the cost without it. The detour is accepted when

$$C\_loop\le(1+\omega_{cost})\cdot C\_plan$$
Since an active closed loop usually requires a detour that reduces exploration efficiency, ω_cost represents the trade-off between exploration efficiency and global consistency: a larger weight means tolerating a higher loop-closure cost in order to correct the accumulated drift error of the unmanned aerial vehicle and maintain a more globally consistent map.
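The candidate selection and the detour test can be sketched as follows with SciPy's cKDTree over the historical path points; approximating the path cost c(·,·) by Euclidean distance is an assumption of this sketch.

```python
# Minimal sketch of active loop-closure point selection: balance the fitness
# F(x) = ln(x + 1) * w_d against the cost of reaching the point, then accept
# the detour only if C_loop <= (1 + w_cost) * C_plan.
import numpy as np
from scipy.spatial import cKDTree

def pick_loop_point(points, n_paths, p_current, p_goal,
                    w_d=1.0, w_c=0.5, w_cost=0.3, k=10):
    tree = cKDTree(points)
    _, idx = tree.query(p_current, k=min(k, len(points)))
    def score(i):
        fitness = np.log(n_paths[i] + 1.0) * w_d
        return w_c * np.linalg.norm(points[i] - p_current) - fitness
    best = min(np.atleast_1d(idx), key=score)
    c_loop = (np.linalg.norm(points[best] - p_current) +
              np.linalg.norm(p_goal - points[best]))
    c_plan = np.linalg.norm(p_goal - p_current)
    return best if c_loop <= (1.0 + w_cost) * c_plan else None
```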
The active loop detection strategy proposed by the invention ensures that a single (independent) unmanned aerial vehicle maintains good positioning in a complex environment, so that a good initial value is available after the unmanned aerial vehicles of the group meet, achieving a coarse-to-fine positioning effect and effectively guaranteeing the global consistency of the multiple unmanned aerial vehicles.
Compared with the prior art, the fisheye cameras and UWB modules of the invention free the unmanned aerial vehicle group from field-of-view limitations, so that relative states can be estimated accurately even without abundant common environmental features. The distributed mutual loop detection method removes the traditional centralized dependence on a ground station: each unmanned aerial vehicle independently runs its own state estimation and relative state estimation, and the system does not fail when an individual unmanned aerial vehicle exits. The outlier rejection module effectively removes outliers from the UWB measurements, making the ranging more accurate and preventing reduced collaborative positioning accuracy or outright estimation errors. The key frame redundancy mechanism limits the computing resources used by each unmanned aerial vehicle: by computing the redundancy value of every key frame, the key frame with the highest redundancy can be deleted, effectively retaining earlier key frames while making the map sparser, which leads to less drift and better global consistency. For the accumulated drift that arises when the unmanned aerial vehicle group degrades to single-vehicle VIO over long periods, an active loop detection method is proposed to eliminate the drift error. Through this method a single unmanned aerial vehicle maintains good positioning in a complex environment, a good initial pose value is available for relative state estimation after the unmanned aerial vehicles meet, a coarse-to-fine positioning effect is achieved, and the global consistency of the multiple unmanned aerial vehicles is effectively guaranteed.
It should be noted that, in this document, the terms "comprises", "comprising" or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article or apparatus that comprises the element.
While the embodiments of the present invention have been illustrated and described above in connection with the drawings as what are presently considered the most practical and preferred embodiments, it is to be understood that the invention is not limited to the disclosed embodiments but, on the contrary, is intended to cover the various equivalent modifications and arrangements included within the spirit and scope of the appended claims.

Claims (8)

1. A collaborative positioning method for a distributed unmanned aerial vehicle group in a complex environment, characterized by comprising the following steps:
S1, installing a fisheye camera and a UWB module on each unmanned aerial vehicle in the unmanned aerial vehicle group;
S2, acquiring a fisheye image through the fisheye camera, and preprocessing the fisheye image to obtain an optimized image;
S3, extracting visual-inertial odometry (VIO) from the optimized image, and estimating the motion of each unmanned aerial vehicle according to the visual-inertial odometry to obtain a measurement model of each unmanned aerial vehicle;
S4, performing mutual loop detection on the optimized images through a distributed loop detection method, estimating the relative poses between the unmanned aerial vehicles through the fisheye cameras, and eliminating the drift error of the visual-inertial odometry;
S5, measuring the relative distances between the unmanned aerial vehicles through the UWB modules;
S6, establishing a redundant key frame deletion mechanism, and deleting the key frame with the highest redundancy value among the key frames of the visual-inertial odometry according to the mechanism; wherein a key frame of the visual-inertial odometry comprises the optimized image and the extrinsic parameters of the fisheye camera;
S7, estimating the state of the unmanned aerial vehicle group according to the measurement model;
S8, performing detection for each unmanned aerial vehicle in the group and judging whether neighboring unmanned aerial vehicles can be observed; if so, the currently detected unmanned aerial vehicle is close to the other unmanned aerial vehicles, and a short-range high-precision distributed collaborative positioning strategy is executed; if not, the current unmanned aerial vehicle cannot be detected successfully or its relative distance to the other unmanned aerial vehicles is large, and it is set as an independent unmanned aerial vehicle that positions itself autonomously through an active loop detection strategy.
2. The collaborative positioning method for a distributed unmanned aerial vehicle group in a complex environment according to claim 1, wherein in step S2 the preprocessing of the fisheye image comprises the following steps:
S21, converting the fisheye image according to a cylindrical projection model to obtain the optimized image;
the cylindrical projection model satisfies the following calculation formula:

$$\rho=\sqrt{X^{2}+Z^{2}},\qquad \phi=\operatorname{atan2}(X,Z),\qquad u=f_{\phi}\,\phi+u_{0},\qquad v=f_{Y}\,\frac{Y}{\rho}+v_{0}$$

wherein ρ represents the radial distance to the cylinder axis, φ = atan2(X, Z) represents the azimuth angle, f_φ and f_Y represent the focal lengths along the X-axis and Y-axis respectively, u_0 and v_0 represent the principal point, u and v represent the coordinates of the projected point on the fisheye image, and X, Y and Z represent the coordinate values along the X-, Y- and Z-axes respectively.
3. The collaborative positioning method for a distributed unmanned aerial vehicle group in a complex environment according to claim 1, wherein the measurement model satisfies the following relation:

$$z_{i}^{t-1,t}=\left(\hat{x}_{i}^{t-1}\right)^{-1}\otimes\hat{x}_{i}^{t}+n_{vio}$$

wherein z_i^{t-1,t} represents the relative pose of unmanned aerial vehicle i from time t-1 to time t, x̂_i^t represents the four-degree-of-freedom pose of unmanned aerial vehicle i at time t, and n_vio represents Gaussian noise.
4. The collaborative positioning method for a distributed unmanned aerial vehicle group in a complex environment according to claim 1, wherein the distributed loop detection method comprises:
defining a first unmanned aerial vehicle; when the first unmanned aerial vehicle receives a key frame of the visual-inertial odometry, extracting features of the key frame to obtain feature information; integrating and packaging the feature information with the state information of the first unmanned aerial vehicle to obtain an encapsulated key frame; and broadcasting the encapsulated key frame to the other unmanned aerial vehicles;
defining a second unmanned aerial vehicle, and establishing a local database and a common-view database, wherein the local database stores key frames generated by the unmanned aerial vehicle itself and the common-view database stores key frames received through communication with neighboring unmanned aerial vehicles; after the second unmanned aerial vehicle receives the encapsulated key frame, judging whether the numbers of the first and second unmanned aerial vehicles are the same; if so, storing the encapsulated key frame into the local database; if not, storing it into the common-view database;
retrieving the key frame with the highest matching degree to the encapsulated key frame from the local database or the common-view database as a returned key frame, and outputting the returned key frame to the first unmanned aerial vehicle, wherein the returned key frame comprises a global descriptor, landmarks, and the velocity and acceleration of the second unmanned aerial vehicle;
performing distributed loop closure detection on the loop between the first and second unmanned aerial vehicles according to the encapsulated key frame and the returned key frame;
and judging whether the loops between the first and second unmanned aerial vehicles are pairwise consistent; if not, rejecting the loop as an outlier.
5. The collaborative positioning method for a distributed unmanned aerial vehicle group in a complex environment according to claim 1, wherein the relative distance measured by the UWB modules between the unmanned aerial vehicles satisfies the following calculation formula:

$$d_{ij}^{t}=\left\|\hat{p}_{j}^{t}-\hat{p}_{i}^{t}\right\|+n_{D}+n_{nlos}$$

wherein d_ij^t represents the distance measurement from unmanned aerial vehicle i to unmanned aerial vehicle j at time t, p̂ represents the translation part of the unmanned aerial vehicle state x̂, n_D represents Gaussian noise, and n_nlos represents the measurement error under non-line-of-sight conditions.
6. The method for co-locating a distributed unmanned aerial vehicle group in a complex environment according to claim 5, wherein the unmanned aerial vehicle further comprises an outlier rejection module, and the outlier rejection module is configured to reject outlier measurements generated when the UWB module measures the relative distances between unmanned aerial vehicles.
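The claim does not disclose the rejection rule itself, so the sketch below substitutes a standard median/MAD gate over a window of recent ranges as one plausible realisation:

```python
import numpy as np

def reject_uwb_outliers(ranges, k=3.0):
    """Discard ranges more than k scaled median-absolute-deviations
    from the window median (a generic robust-statistics gate)."""
    r = np.asarray(ranges, dtype=float)
    med = np.median(r)
    mad = 1.4826 * np.median(np.abs(r - med))  # ~sigma for Gaussian data
    keep = np.abs(r - med) <= k * max(mad, 1e-6)
    return r[keep]
```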
7. The co-location method of a distributed unmanned aerial vehicle group in a complex environment according to claim 1, wherein the redundancy value of the key frame of the visual inertial odometer satisfies the following calculation formula:

R(F_i) = (1 / |ζ_i|) · Σ_{j∈ζ_i} Ob(M_j)

wherein i represents the sequence number of key frame F_i, ζ_i represents the set of all map points observed by key frame F_i, j represents the sequence number of a map point observed by key frame F_i, and Ob(M_j) represents the observation count of the map point M_j observed by key frame F_i.
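A sketch of the redundancy computation as reconstructed above; `obs_counts`, mapping a map-point ID to how many keyframes observe it, is an assumed data structure:

```python
def keyframe_redundancy(zeta_i, obs_counts):
    """Average observation count of the map points seen by keyframe F_i.

    zeta_i     : iterable of map-point IDs observed by F_i
    obs_counts : dict mapping map-point ID -> number of observing keyframes
    A high value means the frame's points are widely covisible, making the
    frame a candidate for removal when computing resources are limited.
    """
    zeta_i = list(zeta_i)
    if not zeta_i:
        return 0.0
    return sum(obs_counts[j] for j in zeta_i) / len(zeta_i)
```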
8. The method for co-locating a distributed unmanned aerial vehicle group in a complex environment according to claim 1, wherein the active loop detection strategy is:
maintaining the path points of the independent unmanned aerial vehicle in a KD tree, and scoring how suitable each path point is to serve as an active closed-loop path point, to obtain a suitability score for each point;
selecting, according to the suitability scores and the KD tree, the path point with the minimum cost from the current position of the independent unmanned aerial vehicle as the optimal path point;
and calculating the cost of the active closed-loop detection path from the independent unmanned aerial vehicle to the optimal path point, so as to correct the accumulated drift error of the independent unmanned aerial vehicle.
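A hedged sketch of such waypoint selection using SciPy's cKDTree; the cost function (travel distance minus weighted suitability) is an assumption, since the claim does not define how distance and suitability are combined:

```python
import numpy as np
from scipy.spatial import cKDTree

def pick_loop_waypoint(waypoints, fitness, current_pos, alpha=1.0):
    """Choose the active closed-loop target among past path points.

    waypoints   : (N, 3) array of path points maintained in a KD-tree
    fitness     : (N,) suitability score of each point as a closure target
    current_pos : (3,) current position of the drone
    """
    waypoints = np.asarray(waypoints, float)
    tree = cKDTree(waypoints)                 # spatial index over path points
    dists, idx = tree.query(current_pos, k=len(waypoints))
    dists, idx = np.atleast_1d(dists), np.atleast_1d(idx)
    cost = dists - alpha * np.asarray(fitness, float)[idx]  # near + suitable = cheap
    return waypoints[idx[np.argmin(cost)]]
```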
CN202311873606.1A 2023-12-29 2023-12-29 Collaborative positioning method for distributed unmanned aerial vehicle group in complex environment Pending CN117826141A (en)

Publications (1)

Publication Number Publication Date
CN117826141A 2024-04-05

Family

ID=90522513


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111024066A (en) * 2019-12-10 2020-04-17 中国航空无线电电子研究所 Unmanned aerial vehicle vision-inertia fusion indoor positioning method
KR20200083349A (en) * 2018-12-31 2020-07-08 한국항공대학교산학협력단 Method for improving position estimation performance by suppressing sidelobe clutter of random phase modulation signal and uwb radar system for preventing collision in multi-path envirionment
CN112581590A (en) * 2020-12-28 2021-03-30 广东工业大学 Unmanned aerial vehicle cloud edge terminal cooperative control method for 5G security rescue networking
CN113108771A (en) * 2021-03-05 2021-07-13 华南理工大学 Movement pose estimation method based on closed-loop direct sparse visual odometer
CN113506342A (en) * 2021-06-08 2021-10-15 北京理工大学 SLAM omnidirectional loop correction method based on multi-camera panoramic vision
CN113702918A (en) * 2021-08-31 2021-11-26 广东工业大学 Nonlinear phase-locked loop Beidou signal tracking system
CN113781645A (en) * 2021-08-31 2021-12-10 同济大学 Indoor parking environment-oriented positioning and mapping method
CN115267820A (en) * 2022-07-15 2022-11-01 西安邮电大学 Fire scene map construction method and system fusing laser radar/vision/UWB
CN116295340A (en) * 2023-03-09 2023-06-23 苏州三介飞航无人机科技有限公司 Unmanned aerial vehicle binocular vision SLAM method based on panoramic camera
WO2023155258A1 (en) * 2022-02-21 2023-08-24 武汉大学 Visual inertial odometry method that contains self-calibration and is based on keyframe sliding window filtering



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination