CN116295342A - Multi-sensing state estimator for aircraft survey - Google Patents


Info

Publication number
CN116295342A
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
state estimator
imu
optimization
Prior art date
Legal status
Pending
Application number
CN202310243900.8A
Other languages
Chinese (zh)
Inventor
孙艺东
吴恩铭
唐昊
吴皓楠
李光印
罗飞扬
马世超
许悰瑞
符椿梅
王晓娜
张馨玥
Current Assignee
Civil Aviation University of China
Original Assignee
Civil Aviation University of China
Priority date
Filing date
Publication date
Application filed by Civil Aviation University of China
Priority to CN202310243900.8A
Publication of CN116295342A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/005: Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/20: Instruments for performing navigational calculations
    • G01C21/206: Instruments for performing navigational calculations specially adapted for indoor navigation
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a multi-sensor state estimator for aircraft survey. The method establishes a detailed mathematical model of a quadrotor unmanned aerial vehicle (UAV), formulates an optimization-based model of the multi-sensor state estimator, and addresses the UAV's exploration of unknown environments. It realizes tightly coupled estimation of image and IMU data, tracks visual images, and completes inertial pre-integration; after these steps an initialization stage with three parts begins. The invention combines the advantages of tight and loose coupling, exploiting the short-term stability of a low-cost Inertial Measurement Unit (IMU) within the image sampling interval, so that the vision system achieves a higher effective sampling frequency and the UAV maintains stable pose tracking under acceleration and sharp rotation. Finally, combined with a UAV swarm control algorithm and strategy, simultaneous localization and mapping of clustered UAVs based on the fusion framework is further studied, yielding a multi-sensor state estimator with high precision and good real-time performance.

Description

Multi-sensing state estimator for aircraft survey
Technical Field
The invention relates to the technical field of flight control sensing, in particular to a multi-sensing state estimator for aircraft survey.
Background
From the perspective of applicable scenarios, UAV positioning methods can be divided into indoor and outdoor methods. GPS is effective in open outdoor long-range scenes, but provides only coarse position estimates over a large area.
State estimation is a multi-sensor data fusion method: the measurement accuracy of a system is improved by fusing two or more sources of inexact information.
Traditional indoor positioning algorithms are mainly divided into odometry methods and inertial navigation methods. Inertial navigation uses an accelerometer and a gyroscope for positioning, but its accuracy is generally poor because integration accumulates error. Odometry methods are further divided, by sensor, into optical flow, visual odometry and visual-inertial odometry. The optical flow method is widely used for indoor UAV positioning because its principle is simple and easy to implement.
Positioning based on visual odometry (VO) is a newer and actively studied technology. VO collects image data through a camera rigidly mounted on the UAV and estimates pose from image features and motion constraints, so it does not accumulate error. However, when the UAV moves fast relative to the sampling rate of the vision sensor, the number of tracked feature points drops and estimation accuracy degrades.
Disclosure of Invention
It is an object of the present invention to provide a multi-sensor state estimator for aircraft survey that solves the problems identified in the Background section.
To achieve the above purpose, the present invention provides the following technical solution: a multi-sensor state estimator for aircraft survey. The method establishes a detailed mathematical model of a quadrotor UAV, formulates an optimization-based model of the multi-sensor state estimator, addresses the UAV's exploration of unknown environments, realizes tightly coupled estimation of image and IMU data, tracks visual images, and completes inertial pre-integration. After these steps, initialization begins, which comprises: solving the relative rotation between the camera and the IMU, initializing the camera, and aligning the IMU with the visual information;
and loop detection is performed in parallel. Using the local path planning method, the UAV can fly among obstacles; once a single UAV has local path planning capability, collaborative area search is realized through the SLAM target allocation strategy of the UAV cluster.
Preferably, a motion model is built for the quadrotor UAV; through force and moment analysis of the airframe, simplified equations of motion can be obtained:
(Equations of the simplified quadrotor motion model; rendered as images in the original document.)
preferably, the optimization-based multi-sensor state estimator is described as a model. The state estimator realizes real-time pose estimation of the UAV in complex environments and removes the traditional requirement that a UAV SLAM system carry many sensors. Finally, for the exploration and mapping of unknown environments by a quadrotor UAV cluster, an online UAV motion planning algorithm suited to the multi-sensor state estimator is studied: an improved A* algorithm based on dynamic path search and a B-spline trajectory optimization method are proposed, solving online motion decision and planning for a single UAV; and a UAV cluster SLAM control strategy is designed to plan target points and path points for UAVs in the cluster, closing the loop of the method framework.
Preferably, exploring an unknown environment with a UAV requires solving three problems: state estimation, path planning and motion control.
Preferably, tightly coupled estimation of image and IMU data is realized: depth visual images are used to obtain camera motion, while pre-integration of IMU data yields the carrier trajectory; the trajectories computed from images and from the IMU are tightly coupled through a designed optimization process. The difference between the visual measurement and the IMU estimate, i.e. the residual, is minimized by least-squares optimization, achieving tight coupling of image and IMU data.
Preferably, visual image tracking is designed to recover the image frame state from the pixel coordinates and depths of several groups of feature points, where the state comprises the camera's spatial position (P), velocity (V) and rotation as a quaternion (Q). It comprises three main processing steps: sparse optical flow (KLT) tracking, SFM three-dimensional motion reconstruction, and sliding-window keyframe selection.
Preferably, inertial pre-integration is completed, coupling the visual and inertial observations: the visual observation is solved and the residual is computed, where the Jacobian matrix of the residual gives the descent direction in the optimization and the covariance matrix gives the weight of each observation. In particular, a pre-integration step is introduced to avoid the high cost of re-optimizing continuous-time integrals; the inertial measurements at consecutive moments are pre-integrated in the world coordinate system.
Preferably, after the above steps, initialization is required. Specifically, the visually-inertially tightly coupled system must recover and calibrate system parameters through an initialization process; the recovered parameters include the camera scale, gravity, velocity, and the measurement bias of the IMU. Since visual three-dimensional motion reconstruction (SFM) performs well during initialization, SFM is its main component. By aligning the IMU pre-integration result with the visual reconstruction result, the IMU measurement bias can be further initialized. Initialization mainly comprises three parts: obtaining the relative rotation between the camera and the IMU, initializing the camera, and aligning the IMU with the visual information.
Preferably, loop detection is required. After a new keyframe is generated, the FAST feature detector searches for new feature points, distinct from those found by KLT optical flow tracking. Descriptors are extracted with the BRIEF method, matched against historical descriptors, and stored and retrieved with a DBoW bag-of-words dictionary. If feature points corresponding to descriptors in the keyframe match historical feature points stored in the bag of words, the corresponding feature points are matched to find loop points. If a loop point is found, the earliest keyframe in the loop is located and its pose is fixed. Because loop detection is always slower than keyframe generation, a frame-skipping method is usually adopted to discard some keyframes so that detection does not fall behind keyframe generation. After loop detection completes, the multi-sensor state estimator returns the loop-frame information to the back-end joint optimization process by fast relocalization to update the optimization data. The UAV must also be guaranteed not to collide with obstacles: using the local path planning method, the UAV generates one or more local paths online from the constructed map and obstacle information updated in real time by the sensors, so that it can fly among obstacles; a B-spline optimization method then refines the trajectory produced by dynamic search, improving path smoothness and avoiding paths that pass too close to obstacles.
Preferably, a coordinated area search by multiple UAVs is achieved through a multi-UAV cluster SLAM target allocation strategy, with the region partitioned by the DARP algorithm (Divide Areas Algorithm for Optimal Multi-Robot Coverage Path Planning). DARP partitions the region according to the initial positions of the UAVs, ensuring the sub-regions are approximately equal in area and connected; once partitioning is complete, each UAV executes the configured search strategy within its sub-region.
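The partitioning idea can be sketched as follows. This is a simplified, nearest-seed illustration of the region-division step, not the full DARP algorithm (real DARP iteratively rebalances the assignment so regions become connected and near-equal in area); the grid size and seed positions are made up for the example.

```python
# Toy sketch of the region-partition idea behind DARP: assign each free
# grid cell to the nearest UAV start position, then each UAV searches
# only its own region. Real DARP iteratively rebalances the assignment
# so regions are connected and approximately equal in area.

def partition_grid(rows, cols, seeds):
    """seeds: list of (r, c) UAV start cells. Returns a 2D list of
    region indices, one per cell (nearest seed, Manhattan distance)."""
    region = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            dists = [abs(r - sr) + abs(c - sc) for sr, sc in seeds]
            region[r][c] = dists.index(min(dists))
    return region

region = partition_grid(4, 6, [(0, 0), (3, 5)])
sizes = [sum(row.count(k) for row in region) for k in range(2)]
print(sizes)  # cell counts per UAV region
```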
Compared with the prior art, the invention has the beneficial effects that:
by combining the advantages of tight and loose coupling and exploiting the short-term stability of a low-cost Inertial Measurement Unit (IMU) within the image sampling interval, the system achieves a higher effective sampling frequency for the vision system and enables the UAV to maintain stable pose tracking under acceleration and sharp rotation. Finally, combined with a UAV swarm control algorithm and strategy, simultaneous localization and mapping of clustered UAVs based on the fusion framework is further studied, yielding a multi-sensor state estimator with high precision and good real-time performance.
Drawings
FIG. 1 is a structural modeling block diagram of unknown environmental exploration;
FIG. 2 shows the position control block diagrams: PID structures for horizontal position, altitude and attitude control;
FIG. 3 is a block diagram of a visual image tracking system;
FIG. 4 is a schematic view of a sliding window;
FIG. 5 is a schematic diagram of a quick positioning process;
FIG. 6 is a schematic diagram of random search and parallel search;
FIG. 7 is a schematic diagram of a tight coupling process of an image with an IMU;
fig. 8 is a schematic diagram of pre-integration.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to figs. 1-8, the present invention provides a technical solution: a multi-sensor state estimator for aircraft survey that provides UAV positioning and environment mapping in multiple scenarios. Applied to a practical UAV cluster SLAM system, and combined with a UAV control algorithm and strategy, it completes autonomous flight and map construction of the cluster system in unknown environments. By combining the advantages of tight and loose coupling and exploiting the short-term stability of a low-cost Inertial Measurement Unit (IMU) within the image sampling interval, the system achieves a higher effective sampling frequency for the vision system and stable pose tracking under acceleration and sharp rotation. Finally, combined with a UAV cluster control algorithm and strategy, simultaneous localization and mapping of clustered UAVs based on the fusion framework is studied in depth, realizing a UAV SLAM system with high precision and good real-time performance. The system is mounted on a QAV250 quadrotor airframe to form a closed-area survey aircraft capable of high-precision indoor positioning without GPS and of large-scale deployment. The following is the study protocol for the multi-sensor state estimation system:
firstly, a detailed mathematical model is built for the quadrotor UAV; through force and moment analysis of the airframe, simplified equations of motion can be obtained:
(Equations of the simplified quadrotor motion model; rendered as images in the original document.)
then the optimization-based multi-sensor state estimator is described as a model. The state estimator realizes real-time pose estimation of the UAV in complex environments and removes the traditional requirement that a UAV SLAM system carry many sensors. Finally, for the exploration and mapping of unknown environments by a quadrotor UAV cluster, an online UAV motion planning algorithm suited to the multi-sensor state estimator is studied. An improved A* algorithm based on dynamic path search and a B-spline trajectory optimization method are proposed, solving online motion decision and planning for a single UAV. A UAV cluster SLAM control strategy is designed to plan target points and path points for UAVs in the cluster, closing the loop of the method framework.
In particular, three problems must be solved to complete the UAV's exploration of an unknown environment: state estimation, path planning and motion control. In practical engineering these three problems are assigned to three different processors to reduce the computational and real-time load of the system; their roles and relations are shown in fig. 1.
The flight control system of a small quadrotor UAV generally uses a single-chip microcontroller as the computing unit of the flight controller. The quadrotor carries strapdown inertial sensors, altitude sensors and other sensors that provide the necessary feedback data. The quadrotor is a nonlinear, underactuated plant; its attitude and position controllers are designed with the classical proportional-integral-derivative (PID) control algorithm. Fig. 2 shows, from top to bottom, the horizontal position control, altitude control and attitude control PID block diagrams.
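A minimal discrete PID loop of the kind used in these controllers can be sketched as follows. The gains, anti-windup limit, and the toy first-order plant are illustrative assumptions, not values from the patent.

```python
# Minimal discrete PID controller of the kind used for the quadrotor's
# attitude and position loops (gains and the anti-windup clamp are
# illustrative, not taken from the patent).

class PID:
    def __init__(self, kp, ki, kd, i_limit=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i_limit = i_limit       # clamp on the integral term (anti-windup)
        self.integral = 0.0
        self.prev_err = None

    def update(self, err, dt):
        self.integral += err * dt
        self.integral = max(-self.i_limit, min(self.i_limit, self.integral))
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Example: an altitude loop driving the height error to zero on a toy
# integrator plant (z' = u), simulated for 10 s at 50 Hz.
pid = PID(kp=2.0, ki=0.1, kd=0.5)
z, z_ref, dt = 0.0, 1.0, 0.02
for _ in range(500):
    z += pid.update(z_ref - z, dt) * dt
print(z)  # settles near the 1.0 reference
```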
The control allocator converts the desired thrust and moments into desired motor speeds. The thrust of a single propeller while the rotorcraft hovers can be expressed as:

(Equation rendered as an image in the original document.)

c_T is a constant that can be measured experimentally. The counter-torque exerted on the fuselage by a single propeller while hovering can be expressed as:

(Equations rendered as images in the original document.)

c_M is a constant that can be measured experimentally. The control allocation of the X-configuration quadrotor and the multirotor control-efficiency model are as follows:

(Equation rendered as an image in the original document.)

c_T, d and c_M are unknown parameters that can be compensated by scaling factors in the controller. Denoting the throttle, pitch, roll and yaw outputs as σ1, σ2, σ3 and σ4 gives:

(Equation rendered as an image in the original document.)
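The allocation step can be sketched numerically. This assumes the standard relations f_i = c_T·ω² and m_i = c_M·ω² named in the text; the coefficient values, arm length, and the particular motor sign convention are placeholders, not values from the patent.

```python
import numpy as np

# Control-allocation sketch for an X-configuration quadrotor. Each
# propeller produces thrust f_i = c_T * w_i^2 and counter-torque
# m_i = c_M * w_i^2; the allocator inverts the linear map from the four
# squared motor speeds to total thrust and body torques. Constants and
# the sign convention are illustrative placeholders.

c_T, c_M, d = 1e-5, 1e-7, 0.125        # thrust coeff, torque coeff, arm length
k = d / np.sqrt(2)                      # rotor moment arm about x/y in X layout

# Rows: total thrust, roll torque, pitch torque, yaw torque.
# Columns: motors 1..4 (signs follow one common X-frame convention).
B = np.array([
    [ c_T,     c_T,     c_T,     c_T   ],
    [-k * c_T, k * c_T, k * c_T, -k * c_T],
    [ k * c_T, -k * c_T, k * c_T, -k * c_T],
    [ c_M,     c_M,    -c_M,    -c_M   ],
])

def allocate(thrust, tau):
    """Desired total thrust and body torques -> motor speeds (rad/s)."""
    w_sq = np.linalg.solve(B, np.array([thrust, *tau]))
    return np.sqrt(np.clip(w_sq, 0.0, None))

w = allocate(thrust=6.0, tau=(0.0, 0.0, 0.0))  # pure hover command
print(w)  # all four motors spin equally in hover
```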
the state estimation is a multi-sensor data fusion method, and the measurement precision of a measurement system is improved by utilizing the fusion of more than two types of inaccurate information; based on the optimized multi-sensor state estimator system structure, the method mainly comprises four parts: visual feature identification and tracking, IMU pre-integration, least squares optimization for residual errors, loop detection.
For this state estimator, the invention offers the following innovations: (1) an RGB-D camera is introduced as the image sensor, solving the problem that a traditional monocular VIO system cannot recover visual feature scale; (2) a vision-IMU residual concept is proposed and the optimization objective function is redesigned, reducing system complexity; (3) the optimization variables are redesigned, reducing the time required for optimization; (4) the state estimator is implemented in software and its validity verified on a physical system.
The invention follows the design principle of an optimization-based multi-sensor state estimator: depth visual images are used to obtain camera motion, pre-integration of IMU data yields the carrier trajectory, and the trajectories computed from images and from the IMU are tightly coupled through a designed optimization process. The difference between the visual measurement and the IMU estimate, the "residual", is minimized by least-squares optimization, achieving tight coupling of image and IMU data; fig. 7 is a schematic diagram of the tight coupling of an image with the IMU.
Visual image tracking is the image-processing stage of the front-end data preprocessing in the optimization-based multi-sensor state estimator. The image frame state is recovered from the pixel coordinates and depths of several groups of feature points, where the state comprises the camera's spatial position (P), velocity (V) and rotation as a quaternion (Q). The system architecture of the visual tracking part is shown in fig. 3 and includes three main processing steps: sparse optical flow (KLT) tracking, SFM three-dimensional motion reconstruction, and sliding-window keyframe selection.
The KLT sparse optical flow method tracks, in the current frame, the positions of the feature points extracted in the previous frame. To obtain the camera's position in space, the SFM method reconstructs the camera motion. When searching for feature points, a non-maximum suppression radius is set to avoid overly dense detections; this radius is the minimum allowed distance between feature points. After feature points are acquired in the current image, KLT optical flow tracking follows them into the next frame, assigning each the same feature point ID. The system then screens keyframes and rejects useless frames so that images arrive at a controlled rate, reducing the computational load of back-end optimization. A sliding window stores and updates the keyframes currently being processed; during keyframe judgment the window size defaults to 10. Keyframes in the sliding-window method are shown in fig. 4.
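The keyframe screening and sliding window above can be sketched as follows. The parallax-based acceptance criterion and its threshold are illustrative assumptions; only the window size of 10 comes from the text, and the real estimator marginalizes (rather than simply drops) the oldest keyframe.

```python
from collections import deque

# Sketch of the sliding-window keyframe buffer: new frames are screened,
# accepted keyframes enter a fixed-size window (default 10 in the text),
# and the oldest keyframe falls out when the window is full. The
# parallax criterion and its threshold are illustrative assumptions.

WINDOW_SIZE = 10
PARALLAX_THRESHOLD = 10.0   # average pixel parallax needed to accept a keyframe

window = deque(maxlen=WINDOW_SIZE)   # oldest keyframe is dropped automatically

def try_insert(frame_id, avg_parallax):
    """Keep frames with enough parallax w.r.t. the last keyframe."""
    if avg_parallax >= PARALLAX_THRESHOLD:
        window.append(frame_id)
        return True
    return False   # useless frame rejected

for fid, par in enumerate([12.0, 3.0, 15.0, 9.9, 11.0] + [20.0] * 10):
    try_insert(fid, par)

print(list(window), len(window))
```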
Inertial sensor pre-integration is the most important front-end process of the optimization-based multi-sensor state estimator. Since the sampling frequency of an inertial sensor is usually much higher than that of a vision sensor, the acceleration and angular velocity obtained by the IMU at each instant are integrated to obtain the displacement and rotation between two frames as measured by the IMU. In the present invention, the visual and inertial observations are coupled: the visual observation is solved and the residual computed, where the Jacobian matrix of the residual gives the descent direction in the optimization and the covariance matrix gives the weight of each observation.
First, the continuous-time measurements of the inertial sensor are integrated in the world coordinate system, with the formula:
(Continuous-time IMU integration equation; rendered as an image in the original document.)
because the sampled IMU data are discrete, the above equation must be discretized using midpoint (median) integration:
(Midpoint-discretized form of the above equation; rendered as an image in the original document.)
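The midpoint discretization can be sketched in a reduced setting. This is a minimal 2D illustration (a single yaw angle instead of a quaternion) of taking the average of the two endpoint IMU samples per step; biases, noise and gravity handling, which the real estimator must model, are omitted.

```python
import numpy as np

# Minimal sketch of midpoint ("median") integration of IMU kinematics:
# between samples k and k+1, the acceleration and angular rate are taken
# as the average of the two endpoint measurements. Shown in 2D with a
# single yaw angle; biases, noise and gravity are omitted for clarity.

def imu_midpoint_integrate(accels, gyros, dt, p0, v0, yaw0):
    """accels: (N,2) body-frame accelerations; gyros: (N,) yaw rates."""
    p, v, yaw = np.array(p0, float), np.array(v0, float), float(yaw0)
    for k in range(len(accels) - 1):
        # midpoint attitude from the averaged angular rate
        yaw_mid = yaw + 0.5 * (gyros[k] + gyros[k + 1]) / 2 * dt
        R_mid = np.array([[np.cos(yaw_mid), -np.sin(yaw_mid)],
                          [np.sin(yaw_mid),  np.cos(yaw_mid)]])
        # averaged accelerometer sample, rotated into the world frame
        a_mid = R_mid @ ((accels[k] + accels[k + 1]) / 2)
        p = p + v * dt + 0.5 * a_mid * dt ** 2
        v = v + a_mid * dt
        yaw = yaw + (gyros[k] + gyros[k + 1]) / 2 * dt
    return p, v, yaw

# Constant 1 m/s^2 forward acceleration, no rotation, 1 s of data at 100 Hz:
N, dt = 101, 0.01
p, v, yaw = imu_midpoint_integrate(np.tile([1.0, 0.0], (N, 1)),
                                   np.zeros(N), dt, [0, 0], [0, 0], 0.0)
print(p, v)  # ~[0.5, 0] m and ~[1, 0] m/s after 1 s
```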
according to the problem of high difficulty in later optimization of continuous time integration, a pre-integration link is provided. Solving the problem by performing a pre-integral derivation of successive moments of the inertial sensor in the world coordinate system; the pre-integration diagram is shown in fig. 8.
After front-end visual image tracking and inertial pre-integration are complete, the method enters initialization, the most important stage of its operation. Specifically, the visually-inertially tightly coupled system must recover and calibrate system parameters through initialization, including the camera scale, gravity, velocity, and the measurement bias of the IMU. Since visual three-dimensional motion reconstruction (SFM) performs well during initialization, SFM is its dominant component. By aligning the IMU pre-integration result with the visual reconstruction result, the IMU measurement bias can be further initialized. Initialization mainly comprises three parts: obtaining the relative rotation between the camera and the IMU, initializing the camera, and aligning the IMU with the visual information.
Because the multi-sensor state estimator uses a visual element and an inertial element as tightly coupled data sources, and spatial and temporal misalignment exists between the sensors, rotation calibration between the camera and the IMU is essential: a calibration error of 1-2 degrees makes the system's accuracy extremely poor. The weights can be computed as:
(Weight equation, and its stacked form Q_N over the N measurements; equations rendered as images in the original document.)
where threshold is typically taken as 5, and the solution is the vector corresponding to the smallest singular value among the left singular vectors of Q_N. The relative rotation can thus be obtained by solving the weight equation. At the same time, attention must be paid to the termination condition of the solution, i.e. when calibration is complete. When there is enough rotational motion, the system can estimate the relative rotation well:
(Relative rotation estimate; symbol rendered as an image in the original document.)
In this case Q_N corresponds to an exact solution and its null space has rank 1. During actual calibration, however, there may be degenerate motions in some axial directions, such as uniform motion. The null space of Q_N then has rank greater than 1. The test is whether the second-smallest singular value of Q_N exceeds a certain threshold: if so, the null-space rank is 1; otherwise it is greater than 1. A rank greater than 1 indicates that the relative rotation estimated during initialization is insufficiently accurate or that excessive degenerate motion exists, and the system cannot complete initialization.
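The termination test above can be sketched directly. The matrices here are synthetic (built from chosen singular values purely to exercise the check) and the threshold value is an illustrative assumption.

```python
import numpy as np

# Sketch of the calibration termination check: take the SVD of the
# stacked constraint matrix Q_N and inspect its second-smallest singular
# value. If it exceeds a threshold, the null space has rank 1 (a unique
# relative rotation exists); otherwise the motion is degenerate and
# initialization cannot finish. The matrices below are synthetic.

def calibration_ready(Q_N, threshold=0.25):
    s = np.linalg.svd(Q_N, compute_uv=False)   # singular values, descending
    return bool(s[-2] > threshold)             # second-smallest vs threshold

# Well-excited case: exactly one (near-)zero singular value.
Q_good = np.diag([3.0, 2.0, 1.0, 1e-9])
# Degenerate case: two unobserved directions (e.g. uniform motion).
Q_bad = np.diag([3.0, 2.0, 1e-9, 1e-9])

print(calibration_ready(Q_good), calibration_ready(Q_bad))
```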
Aligning vision with the IMU mainly solves three problems: correcting the gyroscope bias; initializing the velocity, the gravity vector g and the metric scale; and refining the magnitude of the gravity vector g. Having obtained the rotational extrinsics between camera and IMU from the relative rotation of consecutive images above, the gyroscope bias is then computed from the relative rotations of the image frames given by SFM. Finally a least-squares problem is solved with the rotation matrix through the IMU measurement model:
(Least-squares problem for the IMU measurement model; rendered as an image in the original document.)
backend optimization is based on the core of an optimized multi-sensor state estimator. The core idea of back-end optimization is to minimize the cost function consisting of marginalized a priori information, IMU measurement residuals, and visual observation residuals:
(Cost function; rendered as an image in the original document.)
In the cost function, the two residual terms are the IMU measurement residual and the "vision-IMU" observation residual (both shown as images in the original document).
The residual magnitude is expressed as a Mahalanobis distance. The cost function is linearized by Gauss-Newton iteration during the optimization computation. In the back-end optimization, the IMU part is optimized using the IMU residual and its Jacobian matrix. The nonlinear optimization is computed with the Ceres library, which applies Gauss-Newton iterations to the cost function via the Jacobian to obtain the optimal solution.
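The residual / Jacobian / weight machinery can be shown on a tiny problem. This is not the estimator's actual cost (which Ceres minimizes over IMU and vision residuals); it is a toy weighted Gauss-Newton fit of a 2D point to range measurements, with anchors and ranges invented for the example.

```python
import numpy as np

# Tiny Gauss-Newton iteration of the kind the back end performs: build a
# residual vector r, its Jacobian J, weight with an information matrix W,
# and solve the normal equations for an update. The trilateration problem
# below is purely illustrative, not the patent's cost function.

anchors = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 8.0]])
ranges = np.array([5.0, 5.0, 5.0])         # consistent with the true point (3, 4)
W = np.eye(3)                              # information (inverse covariance) weights

x = np.array([1.0, 1.0])                   # initial guess
for _ in range(20):
    d = np.linalg.norm(x - anchors, axis=1)
    r = d - ranges                         # residual vector
    J = (x - anchors) / d[:, None]         # Jacobian of the residual
    # weighted least-squares normal equations: (J^T W J) dx = -J^T W r
    dx = np.linalg.solve(J.T @ W @ J, -J.T @ W @ r)
    x = x + dx

print(x)  # converges to the true point
```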
In the loop detection process, after a new keyframe is generated, the FAST feature detector searches for new feature points, distinct from those found by KLT optical flow tracking. Descriptors are extracted with the BRIEF method, matched against historical descriptors, and stored and retrieved with a DBoW bag-of-words dictionary. If feature points corresponding to descriptors in the keyframe match historical feature points stored in the bag of words, the corresponding feature points are matched to find loop points. If a loop point is found, the earliest keyframe in the loop is located and its pose is fixed. Because loop detection is always slower than keyframe generation, a frame-skipping method is usually adopted to discard some keyframes so that detection does not fall behind keyframe generation. After loop detection completes, the multi-sensor state estimator returns the loop-frame information to the back-end joint optimization process by fast relocalization to update the optimization data; the process is shown in fig. 5. The current loop frame is denoted Frame_cur, and the keyframe forming a loop with Frame_cur is denoted Frame_old. The pose, keyframe index and matching point pairs of Frame_old, together with the keyframe index and timestamp of Frame_cur, are recorded and marked as a loop.
Through reprojection calculation, the change of pose from Frame_cur to Frame_old, i.e. the relative pose of the current frame and the loop frame, can be computed. Using KLT optical flow tracking, matching point pairs between the current frame and the loop frame are obtained. The loop detection process packages the loop frame information and sends it to the joint optimization process. The state estimator replaces the pose of Frame_cur with the pose of Frame_old, replaces the coordinates of the matched feature points in Frame_cur with the corresponding point coordinates in Frame_old, and runs the optimization calculation again.
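The FAST + BRIEF + Hamming-distance matching used in loop detection can be illustrated with a simplified BRIEF-style binary descriptor. This is a stand-in sketch, not OpenCV's actual FAST/BRIEF pipeline; the patch size, sampling pattern, and noise levels are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def brief_like(patch, pairs):
    """Simplified BRIEF-style binary descriptor: compare intensities at
    fixed pixel pairs inside a patch and pack the results into bits."""
    a, b = pairs[:, 0], pairs[:, 1]
    bits = patch[a[:, 0], a[:, 1]] < patch[b[:, 0], b[:, 1]]
    return np.packbits(bits)

def hamming(d1, d2):
    """Hamming distance between two packed binary descriptors (XOR + popcount)."""
    return int(np.unpackbits(d1 ^ d2).sum())

# 256 random test pairs inside a 32x32 patch (fixed for all descriptors).
pairs = rng.integers(0, 32, size=(256, 2, 2))

patch = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)
noisy = np.clip(patch.astype(int) + rng.integers(-5, 6, size=(32, 32)), 0, 255).astype(np.uint8)
other = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)

d_ref, d_noisy, d_other = (brief_like(p, pairs) for p in (patch, noisy, other))
# A noisy view of the same patch should match far better than a different patch.
print(hamming(d_ref, d_noisy) < hamming(d_ref, d_other))  # True
```

This is why binary descriptors suit the real-time budget noted in the text: matching is a XOR plus a popcount, which is why the combination places low demands on the processor.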
During autonomous operation, the unmanned aerial vehicle must be guaranteed not to collide with obstacles. Using the local path planning method, the unmanned aerial vehicle generates one or more local paths online from the generated map and the obstacle information updated in real time by the sensors, so that it can fly among obstacles; this places certain requirements on the local path planning algorithm. First, the algorithm must be an online path planning algorithm that runs in real time on the unmanned aerial vehicle, with the path adjustable in real time according to updated obstacles. Second, to ensure stable flight, the path planned by the local path planning algorithm must satisfy the dynamic constraints of the unmanned aerial vehicle. Finally, the planned path must be a locally optimal or suboptimal solution. Given these requirements, the invention adopts a kinodynamic path search method with B-spline trajectory optimization, so that each unmanned aerial vehicle can plan paths in real time within a local range.
Paths produced by dynamic path search alone may not be ideal. Moreover, because the kinodynamic search algorithm ignores distance information in free space, the searched path usually passes close to obstacles. To solve these problems, a B-spline optimization method is adopted to refine the trajectory generated by the dynamic search, improving the smoothness of the path and avoiding the problem of too small a gap between the path and obstacles.
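The smoothing property the text relies on can be seen by evaluating a uniform cubic B-spline over a jagged searched path used as control points. A minimal sketch (the real optimizer also adjusts the control points against obstacle-distance and dynamic-feasibility costs, which is omitted here):

```python
import numpy as np

def cubic_bspline(ctrl, samples_per_seg=10):
    """Evaluate a uniform cubic B-spline with control points `ctrl`.
    Each curve point is a fixed blend of 4 consecutive control points,
    which is what makes the optimized trajectory C2-continuous (smooth)."""
    M = np.array([[1, 4, 1, 0],
                  [-3, 0, 3, 0],
                  [3, -6, 3, 0],
                  [-1, 3, -3, 1]]) / 6.0      # standard uniform cubic basis
    pts = []
    for i in range(len(ctrl) - 3):
        P = ctrl[i:i + 4]                     # 4 control points per segment
        for u in np.linspace(0, 1, samples_per_seg, endpoint=False):
            U = np.array([1.0, u, u * u, u ** 3])
            pts.append(U @ M @ P)
    return np.array(pts)

# A jagged searched path used as control points; the spline smooths it.
path = np.array([[0, 0], [1, 2], [2, 0], [3, 2], [4, 0]], float)
smooth = cubic_bspline(path)
print(smooth.shape)  # (20, 2): 2 segments of 10 samples each
```

A useful side effect is the convex-hull property: the curve stays inside the hull of its control points, so pushing control points away from obstacles provably pushes the whole trajectory away as well.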
Once a single unmanned aerial vehicle has local path planning capability, collaborative area search by multiple unmanned aerial vehicles can be realized by means of the multi-UAV cluster SLAM target allocation strategy. Unlike point-to-point path planning, the goal of collaborative path planning is for multiple unmanned aerial vehicles to cooperatively cover and scan an area. Collaborative search designs a control scheme for a specific task area, so that the vehicles can quickly and efficiently cover the task area of a fully known environment at minimum cost, or search the task area in an unknown environment to find high-value targets and reduce environmental uncertainty. Collaborative search of a single area by multiple unmanned aerial vehicles can be functionally divided into two parts: first, allocating a search area to each vehicle, i.e. dividing a simply connected area into multiple regions; second, the search strategy of each vehicle within its own region. To divide a simply connected area into multiple regions, the invention uses the DARP (Divide Areas Algorithm for Optimal Multi-Robot Coverage Path Planning) algorithm. The DARP algorithm partitions the area according to the initial positions of the vehicles and guarantees that the resulting regions are approximately equal in area and connected.
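The starting point of DARP-style partitioning can be illustrated by assigning every grid cell to the nearest vehicle start position. This is only the initial nearest-seed split, not the full DARP iteration (which re-weights the distance maps until the regions are equal-area and connected); the grid size and seed positions are illustrative assumptions:

```python
import numpy as np

def partition_grid(shape, seeds):
    """Assign every cell of a grid to the nearest UAV start cell.
    The full DARP algorithm starts from such an assignment and then
    iteratively re-weights the distances until the regions have
    approximately equal area while remaining connected."""
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W]
    cells = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    d = np.linalg.norm(cells[:, None, :] - np.asarray(seeds, float)[None, :, :], axis=2)
    return d.argmin(axis=1).reshape(H, W)

seeds = [(0, 0), (9, 9)]            # initial positions of two UAVs
labels = partition_grid((10, 10), seeds)
areas = np.bincount(labels.ravel())
print(areas)  # roughly equal halves of the 10x10 grid
```

With symmetric start positions the split is already nearly balanced; DARP's contribution is making the balance and connectivity hold for arbitrary start positions and obstacle layouts.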
After the exploration area division is completed, each unmanned aerial vehicle can execute a set search strategy within its region. Common search strategies include random search, parallel ("zig-zag" or boustrophedon) search, grid search, and inward spiral search. In random search, the vehicle flies through the search area at a constant track angle; upon reaching the area boundary it turns with the minimum turning radius back into the area and continues at the new track angle, repeating this process, as shown in Fig. 6(a). In parallel search, the vehicle sweeps the search area in the vertical or horizontal direction; because of vehicle performance constraints, the turning process is less efficient than straight-line flight in terms of energy, path length, and time, so most current research is based on parallel-line search, as shown in Fig. 6(b).
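The parallel ("zig-zag") sweep described above reduces to a short waypoint generator. A minimal sketch over a rectangular region, where the pass spacing would in practice come from the sensor footprint (an assumption here):

```python
def parallel_sweep(x_min, x_max, y_min, y_max, spacing):
    """Waypoints for a parallel (boustrophedon) search: fly straight
    passes across the area, turning at each boundary, with adjacent
    passes separated by `spacing` (e.g. the sensor footprint width)."""
    waypoints, y, left_to_right = [], y_min, True
    while y <= y_max:
        row = [(x_min, y), (x_max, y)]
        waypoints += row if left_to_right else row[::-1]
        left_to_right = not left_to_right  # alternate direction each pass
        y += spacing
    return waypoints

wps = parallel_sweep(0, 10, 0, 4, 2)
print(wps)  # [(0, 0), (10, 0), (10, 2), (0, 2), (0, 4), (10, 4)]
```

Alternating the pass direction is exactly what keeps every turn a minimum-radius turn at the boundary, which is why the text favors this pattern over random search.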
In use, a QAV250 miniature quadrotor unmanned aerial vehicle can be equipped with the multi-sensor system and a microcomputer. The simulation computer uses an Intel i7-10700K processor, an NVIDIA GTX 2060 graphics computing card, and 32 GB of RAM. The sensors are an Intel RealSense D435i and an Intel RealSense T265: the D435i acquires depth images, RGBD images, and inertial information, while the T265 performs position tracking. With the EuRoC dataset, flight control and data processing are implemented in C++ under the Ubuntu system. Open-source third-party libraries are called as computational support, such as the Eigen matrix library, the Ceres nonlinear optimization library, the PCL point cloud library, and the OpenCV image processing library.
To obtain the position of the camera in space, camera motion is reconstructed using the SFM method; in this process, feature points must be marked in the image and their positions obtained in different images. The invention uses the feature point acquisition method (goodFeaturesToTrack) from the OpenCV computer vision library to search the first frame acquired by the system for the feature points with the most salient characteristics. In VIO systems there are typically between 150 and 500 such points, and each feature point has its corresponding feature ID.
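The criterion behind goodFeaturesToTrack (the Shi-Tomasi score) can be sketched directly: score each pixel by the smaller eigenvalue of the local gradient structure tensor and keep the strongest responses. This is a simplified re-implementation of the idea, not OpenCV's optimized routine (which adds quality thresholds and minimum-distance suppression); image and window sizes are illustrative:

```python
import numpy as np

def shi_tomasi_corners(img, max_corners=4, win=1):
    """Minimal sketch of the idea behind OpenCV's goodFeaturesToTrack:
    score each pixel by the smaller eigenvalue of the local gradient
    structure tensor (Shi-Tomasi) and keep the strongest responses."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    H, W = img.shape
    score = np.zeros_like(img)
    for y in range(win, H - win):
        for x in range(win, W - win):
            s = np.s_[y - win:y + win + 1, x - win:x + win + 1]
            a, b, c = Ixx[s].sum(), Iyy[s].sum(), Ixy[s].sum()
            # smaller eigenvalue of the 2x2 tensor [[a, c], [c, b]]
            score[y, x] = 0.5 * (a + b - np.hypot(a - b, 2 * c))
    best = score.ravel().argsort()[::-1][:max_corners]
    return [divmod(int(i), W) for i in best]

# A white square on a black background: its corners should score highest.
img = np.zeros((20, 20))
img[5:15, 5:15] = 255.0
corners = shi_tomasi_corners(img)
print(corners)
```

Edges excite only one eigenvalue and flat regions neither, so only true corners (both eigenvalues large) survive, which is what makes these points stable targets for KLT tracking across frames.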
Because the optimized multi-sensor state estimator uses the same sensor data to estimate pose and depth simultaneously, the visual image tracking process and the SLAM map point cloud creation process are designed to run synchronously to avoid delay in the depth data. For a feature point without depth, if it has been observed by more than two earlier key frames, singular value decomposition is used to compute its coordinates so that its reprojection error on each observed image frame is minimized, meeting the needs of back-end optimization. In the point cloud update process, however, the coordinates of feature points should use the depth measured by the sensor. Meanwhile, when performing SLAM with an RGBD camera instead of a point cloud camera, the following steps are needed to ensure data reliability and an unbiased map:
1) Trust the depth camera: the first observed position of each feature point is taken as an accurate value, so the map point can be added directly.
2) Judge the depth camera error: the three-dimensional world coordinates observed for a feature point in two consecutive frames must not differ greatly; if they do, the observation record of the first frame is deleted. If the difference is small, the average of the two three-dimensional coordinates is taken as the point's three-dimensional position and the map point is added.
3) Screen map points: a point must be observed in several consecutive frames, and a feature point whose three-dimensional world coordinates do not differ greatly across those consecutive image frames is considered accurate and added as a map point.
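The SVD-based coordinate computation for depthless feature points mentioned above is classically done by linear (DLT) triangulation. A minimal two-view sketch, with identity intrinsics and a toy baseline as assumptions:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation: the 3-D point minimizing the algebraic
    reprojection error is the right singular vector of A associated with
    the smallest singular value - the SVD solution used for features
    that have no measured depth."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.stack([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # de-homogenize

# Two toy projection matrices (identity intrinsics, 1 m baseline in x).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])

def project(P, X):
    p = P @ np.append(X, 1.0)
    return p[:2] / p[2]

X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.round(X_est, 6))  # recovers [0.5, 0.2, 4.0]
```

With noise-free observations the recovery is exact; with real tracks, requiring the point to be seen in more than two key frames (as the text specifies) simply adds more rows to A and averages out the noise.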
For loop detection, two detection and description methods with good real-time performance are adopted: the FAST feature point detector and the BRIEF descriptor. This combination reduces the system's processor and memory requirements, guaranteeing real-time performance, while the DBoW bag-of-words dictionary library is used to store and retrieve descriptor information.
In local path planning, the unmanned aerial vehicle estimates its pose state in space in real time with the optimization-based multi-sensor state estimator, computes point cloud (PointCloud) information from the depth data provided by the RGBD sensor and its own position, and updates the map. Because the kinodynamic search algorithm ignores distance information in free space, the searched path is usually close to obstacles; a B-spline optimization method is therefore adopted to refine the trajectory generated by dynamic search. For the indoor scheduling problem of multiple unmanned aerial vehicles, the invention adopts a scheduling algorithm based on the DARP region segmentation method to realize search scheduling of the cluster in indoor environments with multiple connected regions such as rooms and corridors.
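The point cloud computation from depth and pose described above is a back-projection: lift each pixel through the inverse intrinsics, scale by the measured depth, then transform by the estimator's pose. A minimal sketch with toy pinhole intrinsics and a toy pose as assumptions:

```python
import numpy as np

def depth_to_world(depth, K, R, t):
    """Back-project a depth image to world-frame points:
    p_cam = depth * K^-1 [u, v, 1]^T, then p_world = R p_cam + t,
    where (R, t) is the camera-to-world pose from the state estimator."""
    H, W = depth.shape
    vs, us = np.mgrid[0:H, 0:W]
    pix = np.stack([us.ravel(), vs.ravel(), np.ones(H * W)])
    rays = np.linalg.inv(K) @ pix             # unit-depth camera rays
    p_cam = rays * depth.ravel()              # scale rays by measured depth
    return (R @ p_cam + t.reshape(3, 1)).T    # (H*W, 3) world-frame cloud

K = np.array([[100.0, 0, 2], [0, 100.0, 2], [0, 0, 1]])  # toy intrinsics
depth = np.full((4, 4), 2.0)                              # flat wall at 2 m
R, t = np.eye(3), np.array([0.0, 0.0, 1.0])               # camera 1 m along z
cloud = depth_to_world(depth, K, R, t)
print(cloud.shape)  # (16, 3); all points lie on the plane world z = 3.0
```

Since every pixel of the flat wall maps to world z = 3.0, the resulting plane in the cloud is exactly the obstacle surface the local planner must keep clear of.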
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (10)

1. A multi-sensor state estimator for aircraft survey, characterized in that: a detailed mathematical model is established for the quadrotor unmanned aerial vehicle; the optimization-based multi-sensor state estimator is described as a model; the exploration problem of the unmanned aerial vehicle in an unknown environment is solved; tightly coupled estimation of image and IMU data is realized; visual images are tracked and inertial sensor pre-integration is completed; after these steps, initialization is entered, comprising: solving the relative rotation between the camera and the IMU, initializing the camera, and aligning the IMU with the visual information;
and loop detection is performed simultaneously; the unmanned aerial vehicle can fly among obstacles by means of the local path planning method; and after a single unmanned aerial vehicle has local path planning capability, collaborative area search by multiple unmanned aerial vehicles can be realized by means of the SLAM target allocation strategy of the unmanned aerial vehicle cluster.
2. A multi-sensing state estimator for aircraft surveys according to claim 1, wherein: a motion model is built for the quadrotor unmanned aerial vehicle, and a simplified set of motion equations can be obtained through force and moment analysis of the airframe:
[Six equations of motion, rendered as images in the original publication (FDA0004125352390000011 through FDA0004125352390000016); their content is not recoverable from the text.]
3. A multi-sensing state estimator for aircraft surveys according to claim 1, wherein: the optimization-based multi-sensor state estimator is described as a model; the state estimator realizes real-time pose estimation of the unmanned aerial vehicle in a complex environment and solves the problem that traditional unmanned aerial vehicle SLAM must carry too many sensors; finally, for the exploration and map construction of an unknown environment by a quadrotor cluster, an online motion planning algorithm suitable for the multi-sensor state estimator is studied; an improved A* algorithm based on kinodynamic path search and a B-spline trajectory optimization method are provided, solving the online motion decision and planning of a single unmanned aerial vehicle; and an unmanned aerial vehicle cluster SLAM control strategy is designed to complete the planning of target points and path points of the vehicles in the cluster state, thereby closing the loop of the method framework.
4. A multi-sensing state estimator for aircraft surveys according to claim 1, wherein: solving the exploration problem of the unmanned aerial vehicle in an unknown environment mainly involves three problems: the state estimation problem, the path planning problem, and the motion control problem of the unmanned aerial vehicle.
5. A multi-sensing state estimator for aircraft surveys according to claim 1, wherein: tightly coupled estimation of image and IMU data is realized; the camera motion is obtained from the depth visual images and the carrier trajectory is obtained by pre-integrating the IMU data, and tight coupling is achieved through a designed optimization process over the trajectory data computed from the images and the IMU; the difference between the visual measurement and the IMU estimate, i.e. the residual, is minimized by least-squares optimization, realizing tight coupling of the image and IMU data.
6. A multi-sensing state estimator for aircraft surveys according to claim 1, wherein: visual image tracking is designed using the pixel coordinates and depths of several groups of feature points to recover the image frame state, the state comprising the spatial position (P), velocity (V), and quaternion-represented rotation (Q) of the camera; it comprises three main processing steps: sparse optical flow (KLT) tracking, SFM three-dimensional motion reconstruction, and sliding-window key frame selection.
7. A multi-sensing state estimator for aircraft surveys according to claim 1, wherein: inertial sensor pre-integration is completed, and the visual and inertial observations are coupled: the visual observation is solved and the residual computed, where the Jacobian matrix of the residual gives the descent direction in the optimization and the covariance matrix gives the weight of the corresponding observation; in particular, the pre-integration step is provided to avoid the high cost of repeated continuous-time integration in later optimization, which is solved by pre-integration derivation over the continuous-time measurements of the inertial sensor in the world coordinate system.
8. A multi-sensing state estimator for aircraft surveys according to claim 1, wherein: after the foregoing steps are completed, an initialization stage is entered; in particular, the visual-inertial tightly coupled system recovers and calibrates system parameters through the initialization process, the recovered parameters including the camera scale, gravity, velocity, and the measurement bias of the IMU; since visual three-dimensional motion reconstruction (SFM) performs well during initialization, SFM is mainly used in this stage; by aligning the pre-integration result of the IMU with the visual three-dimensional motion reconstruction result, the measurement bias of the IMU can be further initialized, mainly comprising three steps: obtaining the relative rotation between the camera and the IMU, initializing the camera, and aligning the IMU with the visual information.
9. A multi-sensing state estimator for aircraft surveys according to claim 1, wherein: in the detection process, after a new key frame is generated, the FAST feature point detection algorithm is used to search for new feature points, which are distinct from the feature points tracked by KLT optical flow; descriptors are extracted by the BRIEF method, matched against historical descriptors, and stored and retrieved with a DBoW bag-of-words dictionary library; if the feature points corresponding to the descriptors in the key frame are the same as historical feature points stored in the bag of words, the system matches the corresponding feature points to find loop points; if a loop point is found, the earliest key frame in the loop is located and its pose is fixed; because loop detection is always slower than key frame generation, a frame-skipping method is generally adopted to discard some key frames so that loop detection does not fall behind key frame generation and detection efficiency is guaranteed; after loop detection is completed, the multi-sensor state estimator returns the loop frame information to the back-end joint optimization process through fast relocalization to update the optimization data; the unmanned aerial vehicle must be guaranteed not to collide with obstacles; using the local path planning method, the vehicle generates one or more local paths online from the generated map and the obstacle information updated in real time by the sensors, so that it can fly among obstacles; and a B-spline optimization method is adopted to refine the trajectory generated by dynamic search, improving the smoothness of the path and avoiding too small a gap between the path and obstacles.
10. A multi-sensing state estimator for aircraft surveys according to claim 1, wherein: collaborative area search by multiple unmanned aerial vehicles is realized by means of the SLAM target allocation strategy of the unmanned aerial vehicle cluster, and the DARP algorithm (Divide Areas Algorithm for Optimal Multi-Robot Coverage Path Planning) is used to partition the area; the DARP algorithm divides the area according to the initial positions of the vehicles and guarantees that the resulting regions are approximately equal in area and connected; after the exploration area division is completed, each unmanned aerial vehicle can execute a set search strategy within its region.
CN202310243900.8A 2023-03-15 2023-03-15 Multi-sensing state estimator for aircraft survey Pending CN116295342A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310243900.8A CN116295342A (en) 2023-03-15 2023-03-15 Multi-sensing state estimator for aircraft survey

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310243900.8A CN116295342A (en) 2023-03-15 2023-03-15 Multi-sensing state estimator for aircraft survey

Publications (1)

Publication Number Publication Date
CN116295342A true CN116295342A (en) 2023-06-23

Family

ID=86837447

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310243900.8A Pending CN116295342A (en) 2023-03-15 2023-03-15 Multi-sensing state estimator for aircraft survey

Country Status (1)

Country Link
CN (1) CN116295342A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117830879A (en) * 2024-01-02 2024-04-05 广东工业大学 Indoor-oriented distributed unmanned aerial vehicle cluster positioning and mapping method
CN117830879B (en) * 2024-01-02 2024-06-14 广东工业大学 Indoor-oriented distributed unmanned aerial vehicle cluster positioning and mapping method

Similar Documents

Publication Publication Date Title
CN112347840B (en) Vision sensor laser radar integrated unmanned aerial vehicle positioning and image building device and method
Mansouri et al. Cooperative coverage path planning for visual inspection
US10732647B2 (en) Multi-sensor fusion for robust autonomous flight in indoor and outdoor environments with a rotorcraft micro-aerial vehicle (MAV)
Achtelik et al. Autonomous navigation and exploration of a quadrotor helicopter in GPS-denied indoor environments
CN113625774B (en) Local map matching and end-to-end ranging multi-unmanned aerial vehicle co-location system and method
CN109522832B (en) Loop detection method based on point cloud segment matching constraint and track drift optimization
JP2020507072A (en) Laser scanner with real-time online self-motion estimation
CN113485441A (en) Distribution network inspection method combining unmanned aerial vehicle high-precision positioning and visual tracking technology
CN110068335A (en) Unmanned aerial vehicle cluster real-time positioning method and system under GPS rejection environment
CN110726406A (en) Improved nonlinear optimization monocular inertial navigation SLAM method
Steiner et al. A vision-aided inertial navigation system for agile high-speed flight in unmapped environments: Distribution statement a: Approved for public release, distribution unlimited
CN110018691A (en) Small-sized multi-rotor unmanned aerial vehicle state of flight estimating system and method
Shen Autonomous navigation in complex indoor and outdoor environments with micro aerial vehicles
CN115574816B (en) Bionic vision multi-source information intelligent perception unmanned platform
Abeywardena et al. Fast, on-board, model-aided visual-inertial odometry system for quadrotor micro aerial vehicles
Magree et al. Monocular visual mapping for obstacle avoidance on UAVs
CN111238469A (en) Unmanned aerial vehicle formation relative navigation method based on inertia/data chain
CN116295342A (en) Multi-sensing state estimator for aircraft survey
CN117685953A (en) UWB and vision fusion positioning method and system for multi-unmanned aerial vehicle co-positioning
Kehoe et al. State estimation using optical flow from parallax-weighted feature tracking
Liu et al. On terrain-aided navigation for unmanned aerial vehicle using B-spline neural network and extended Kalman filter
Guan et al. A new integrated navigation system for the indoor unmanned aerial vehicles (UAVs) based on the neural network predictive compensation
Cao et al. Visual-Inertial-Laser SLAM Based on ORB-SLAM3
CN114459474A (en) Inertia/polarization/radar/optical flow tight combination navigation method based on factor graph
Hosen et al. A vision-aided nonlinear observer for fixed-wing UAV navigation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination