CN110865650B - Unmanned aerial vehicle pose self-adaptive estimation method based on active vision - Google Patents

Unmanned aerial vehicle pose self-adaptive estimation method based on active vision

Info

Publication number
CN110865650B
CN110865650B (application CN201911133525.1A)
Authority
CN
China
Prior art keywords
pose
unmanned aerial
aerial vehicle
visual
vision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911133525.1A
Other languages
Chinese (zh)
Other versions
CN110865650A (en)
Inventor
元海文
肖长诗
程莉
王艳锋
方艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Institute of Technology
Original Assignee
Wuhan Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Institute of Technology filed Critical Wuhan Institute of Technology
Priority to CN201911133525.1A priority Critical patent/CN110865650B/en
Publication of CN110865650A publication Critical patent/CN110865650A/en
Application granted granted Critical
Publication of CN110865650B publication Critical patent/CN110865650B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/08 Control of attitude, i.e. control of roll, pitch, or yaw
    • G05D1/0808 Control of attitude, i.e. control of roll, pitch, or yaw specially adapted for aircraft
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/10 Simultaneous control of position or course in three dimensions
    • G05D1/101 Simultaneous control of position or course in three dimensions specially adapted for aircraft

Abstract

The invention discloses an unmanned aerial vehicle pose self-adaptive estimation method based on active vision, which comprises at least the following steps: active visual detection, in which the onboard vision system of the unmanned aerial vehicle continuously observes the landing cooperative targets, screens all detected landing cooperative targets, and retains the information of the landing cooperative targets with higher detection precision in the current field of view; unmanned aerial vehicle pose calculation, in which the visual 2D features and the inertial measurement information are taken as input to calculate, in real time, the pose of the unmanned aerial vehicle relative to the cooperative target; and self-adaptive pose fusion, in which all calculated pose solutions of the unmanned aerial vehicle relative to the landing cooperative targets are adaptively fused, based on federal filtering, according to the corresponding covariance information, yielding the optimized pose of the unmanned aerial vehicle. The invention effectively improves the accuracy and effective range of the visual measurements of an unmanned aerial vehicle performing an autonomous landing task, and is also applicable to research on visual perception and positioning for robots.

Description

Unmanned aerial vehicle pose self-adaptive estimation method based on active vision
Technical Field
The invention belongs to the field of autonomous landing and visual navigation of unmanned aerial vehicles, and particularly relates to an unmanned aerial vehicle pose self-adaptive estimation method based on active vision.
Background
In recent years, unmanned aerial vehicles have become a research hotspot in the field of robotics, and their autonomous landing capability (on both stationary and moving platforms) has been intensively studied. For this problem, sensing modes based on GPS, inertia, vision, laser and the like are usually adopted, with the aim of calculating the motion pose of the unmanned aerial vehicle relative to the landing target in real time. The patent with publication number 109211241A discloses an unmanned aerial vehicle autonomous positioning method based on visual SLAM, which consists of a feature extraction and matching motion-solving part, an image and inertial measurement unit (IMU) fusion part, and a 3D point depth estimation part. The patent with publication number 106054929B discloses an unmanned aerial vehicle automatic landing guidance method based on optical flow, which identifies a marker by processing the real-time images captured by the camera of the optical-flow module during landing, and estimates the position and attitude of the marker relative to the unmanned aerial vehicle. Publication 110068321A discloses a UAV relative pose estimation method for fixed-point landing landmarks. The patent with publication number 104166854B discloses a visual graded-landmark positioning and identification method for autonomous landing of unmanned aerial vehicles, which adopts visually graded landmarks to avoid the landmark scale-change problem that arises, under a fixed image resolution, when a single-level landmark is used and the height above ground changes. Similar disclosures include 106516145B, 105197252A, 109270953A, etc. However, the above patents only consider how to calculate the relative pose of the drone from the visual 2D features, and do not consider the problem of vision-based pose information fusion.
Disclosure of Invention
The invention aims to provide an unmanned aerial vehicle pose self-adaptive estimation method based on active vision, mainly intended to counteract the influence that the changing relative distance between the vision system and the observed target during the landing stage of the unmanned aerial vehicle has on visual positioning accuracy: the method both evaluates the image-level detection accuracy at the front end of the visual positioning pipeline and adaptively fuses the pose results output at its back end.
The technical scheme adopted by the invention for achieving the purpose is as follows:
the invention provides an unmanned aerial vehicle pose self-adaptive estimation method based on active vision, which at least comprises the following steps:
active visual detection, namely continuously observing the landing cooperative targets through an airborne visual system of the unmanned aerial vehicle, screening all the detected landing cooperative targets, and reserving information of the landing cooperative targets with higher detection precision in the current visual field range; the landing cooperative target comprises a plurality of groups of cooperative features with different scales, and each group of cooperative features comprises different specific geometric figures;
calculating the pose of the unmanned aerial vehicle, namely calculating the pose of the current unmanned aerial vehicle relative to a cooperative target in real time by taking visual 2D characteristics and inertial measurement information as input;
and self-adaptive pose fusion, namely performing self-adaptive fusion based on federal filtering on all calculated pose solutions of the unmanned aerial vehicle relative to the landing cooperative target according to corresponding covariance information to obtain the optimized pose of the unmanned aerial vehicle.
According to the technical scheme, the active visual detection at least comprises two steps of image target extraction and feature autonomous selection, the unmanned aerial vehicle airborne visual system continuously observes the landing cooperative targets, the image target extraction method is adopted to realize all the landing cooperative target detection and feature extraction, then the feature autonomous selection method is utilized to screen all the detected targets, and target information with high detection precision in the current visual field range is reserved.
According to the technical scheme, the unmanned aerial vehicle pose calculation method comprises the steps of pose measurement based on vision and inertia and pose calculation based on unscented Kalman filtering; the pose measurement method based on vision and inertia mainly utilizes inertia or vision detection information to calculate the relative pose of the unmanned aerial vehicle; the pose calculation method based on the unscented Kalman filtering is mainly responsible for processing pose calculation under the condition of simultaneously having inertia and visual detection information, and the output frequency of the pose calculation is improved.
According to the technical scheme, when only inertial measurement information exists, the pose change of the unmanned aerial vehicle relative to the previous moment is determined by using the time integral of the angular velocity and acceleration information of the inertial measurement information; when the visual 2D features are obtained, the unmanned aerial vehicle calculates the pose of the unmanned aerial vehicle relative to the landing cooperative target according to homography transformation of the visual features.
In connection with the above technical solution, further comprising the steps of:
and anomaly monitoring, namely continuously detecting and identifying the optimized pose solution and eliminating an abnormal value.
According to the technical scheme, the image target extraction method comprises three steps of line segment feature detection, corner feature detection and geometric pattern matching, the airborne vision system processes the image obtained in real time by utilizing the steps, the pattern matching is completed according to the geometric constraint relation between points and lines, and meanwhile landing cooperation target detection and feature extraction are achieved.
According to the technical scheme, the characteristic self-selection method simulates the biological mechanism of human vision for selecting the positioning reference object from the surrounding scene, and selects the optimal visual target for the input information of visual positioning from the combination of landing cooperative targets with different sizes and dimensions according to the 3D-2D projection relation between the imaging proportion and the relative distance of the target in the vision.
According to the technical scheme, the pose calculation method based on the unscented Kalman filtering takes the inertial and visual detection information as input: the inertial sensor accumulates the pose of the unmanned aerial vehicle from angular velocity and acceleration delivered at a refresh rate of 100 or 200 Hz, while the visual detection information yields the pose of the unmanned aerial vehicle through the homography transformation of the visual features; the two pose solutions correct each other within a Bayesian framework, and the covariance information corresponding to the corrected value is obtained; the nonlinear state transition of the unmanned aerial vehicle over time is realized by the unscented transform.
According to the technical scheme, the self-adaptive pose fusion method carries out modular processing on visual pose calculation, federate fusion is carried out by using pose solutions output by the modules and corresponding covariance, abnormal states output by the modules are monitored, and continuous and stable pose estimation is provided for the unmanned aerial vehicle in an autonomous landing stage.
The invention also provides a computer storage medium, in which a computer program executable by a processor is stored, and the computer program executes the unmanned aerial vehicle pose self-adaptive estimation method of the technical scheme.
The invention has the following beneficial effects: by utilizing the onboard vision and inertial information of the rotorcraft and combining information fusion with visual positioning technology, the acquired visual signal is evaluated in advance by the active vision method, and the visual positioning results based on features of different scales are adaptively fused, so that the unmanned aerial vehicle is guaranteed a stable and continuous pose estimate throughout the far-to-near landing process, and the problem of unstable visual detection accuracy at different relative distances is solved.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a flow chart of an unmanned aerial vehicle pose adaptive estimation method based on active vision;
FIG. 2 is a flow chart of an image target detection algorithm in the active visual detection method;
FIG. 3 is a flow chart of a feature active selection algorithm in the active visual inspection method;
FIG. 4 is a flow chart of the pose calculation method of the unmanned aerial vehicle based on vision 2D-3D;
FIG. 5 is a pose calculation flow chart based on vision & inertia in the pose calculation method of the unmanned aerial vehicle;
FIG. 6 is a computational framework of an adaptive pose fusion method.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention provides an unmanned aerial vehicle pose self-adaptive estimation method based on active vision, which at least comprises the following steps:
and active visual detection, namely providing at least one group of high-quality images corresponding to the landing cooperative targets (reference objects) for the unmanned aerial vehicle visual system according to visual 3D-2D projection ratio transformation, so that the unmanned aerial vehicle can extract reliable geometric information from the landing cooperative targets by using an image processing algorithm. Specifically, the active visual detection part enables the unmanned aerial vehicle to search and detect a landing cooperative target through vision; the landing cooperative target comprises a plurality of groups of cooperative features with different scales, and each group of cooperative features is suitable for the observation of the unmanned aerial vehicle vision in different relative distance ranges; when the unmanned aerial vehicle is at different heights, the active visual detection part is responsible for selecting cooperation features with excellent observation quality from landing cooperation targets; each group of cooperative features is composed of different specific geometric figures, and the active visual inspection part is responsible for identifying the cooperative features and extracting points and lines from the cooperative features as image features.
Unmanned aerial vehicle position and pose calculation, which takes visual 2D characteristics and inertial measurement information as input and is used for calculating the position and pose (position and attitude) of the current unmanned aerial vehicle relative to the landing cooperative target in real time; when only inertial measurement information exists, determining the pose change of the unmanned aerial vehicle relative to the previous moment by using the time integral of the angular velocity and acceleration information; when the visual 2D features are obtained, the unmanned aerial vehicle calculates the pose of the unmanned aerial vehicle relative to the landing cooperative target according to homography transformation of the visual features. Specifically, the unmanned aerial vehicle pose calculation part is mainly used for calculating pose changes of the current unmanned aerial vehicle relative to a landing target; the vision sensor (camera) is fixedly arranged at the gravity center position of the unmanned aerial vehicle body and faces downwards vertically, so that the posture and the motion of a vision system and the unmanned aerial vehicle are consistent; taking the detected different cooperative features as input, and calculating the pose of the unmanned aerial vehicle (visual system) relative to the landing target at the current moment according to the change of the features on the visual projection plane; in the time between the acquisition of the visual image frames, the unmanned aerial vehicle carries out integral calculation by utilizing the angular velocity and acceleration information of the airborne IMU to obtain the relative pose of the unmanned aerial vehicle at the current moment; when the IMU and the visual information are obtained at the same time, the pose of the unmanned aerial vehicle is calculated according to a method based on UKF.
And self-adaptive pose fusion is used for carrying out self-adaptive fusion on different visual pose solutions and realizing the optimal estimation of the pose of the unmanned aerial vehicle. Specifically, the self-adaptive pose fusion part takes pose solutions obtained by calculation based on visual features of different scales and corresponding covariance information (active visual detection + unmanned aerial vehicle pose calculation) as input; under a federal filtering framework, covariance information is used as weight factors, weight summation is carried out on different input pose solutions, and a calculated value is considered to be the optimal estimation of the current pose of the unmanned aerial vehicle.
The active visual detection method at least comprises two steps of image target extraction and feature autonomous selection, an unmanned aerial vehicle airborne visual system continuously observes the landing cooperative targets, the image target extraction method is adopted to realize detection and feature extraction of all the landing cooperative targets, then the feature autonomous selection method is utilized to screen all the detected targets, and target information with high detection precision in the current visual field range is reserved.
The unmanned aerial vehicle pose calculation method comprises pose measurement based on vision and inertia and pose calculation based on Unscented Kalman Filtering (UKF); the pose measurement method based on the vision and the inertia mainly utilizes inertia or vision detection information to calculate the relative pose of the unmanned aerial vehicle; the pose calculation method based on the UKF is mainly responsible for processing pose calculation under the condition of inertia and visual detection information at the same time and improving the output frequency of the pose calculation.
The self-adaptive pose fusion method refers to unmanned aerial vehicle pose self-adaptive fusion and anomaly monitoring based on federal filtering; the self-adaptive fusion method based on the federal filtering performs self-adaptive fusion on all pose solutions obtained based on different visual characteristics according to corresponding covariance information, so that the final pose calculation of the unmanned aerial vehicle is optimized; meanwhile, the abnormal monitoring method continuously detects and identifies the optimized pose solution, and eliminates abnormal values.
The image target extraction method comprises three steps of line segment feature detection, corner feature detection and geometric pattern matching, the airborne vision system processes the image obtained in real time by utilizing the steps, the pattern matching is completed according to the geometric constraint relation between points and lines, and the landing cooperation target detection and the feature extraction can be simultaneously realized.
The feature automatic selection method simulates the biological mechanism of human vision for selecting and positioning reference objects from surrounding scenes, namely, the optimal target of the vision is selected from cooperative feature combinations with different sizes and dimensions for input information of the vision positioning according to the 3D-2D projection relation between the imaging proportion and the relative distance of the target in the vision.
The pose calculation method based on the UKF takes the inertial and visual detection information as input: the inertial sensor accumulates the pose of the unmanned aerial vehicle from angular velocity and acceleration delivered at a 100 or 200 Hz refresh rate, while the visual detection information yields the pose of the unmanned aerial vehicle through the homography transformation; the two pose solutions correct each other within a Bayesian framework, and the covariance information corresponding to the corrected value is obtained; the nonlinear state transition of the unmanned aerial vehicle over time is realized by the unscented transform.
The self-adaptive pose fusion method carries out modular processing on the vision pose calculation, federate fusion is carried out by using pose solutions output by the modules and corresponding covariances, abnormal states output by the modules are monitored, and continuous and stable pose estimation is provided for the unmanned aerial vehicle in an autonomous landing stage.
The invention also provides an unmanned aerial vehicle pose self-adaptive estimation system based on active vision, which is mainly used for realizing the unmanned aerial vehicle pose self-adaptive estimation method, and the system at least comprises the following steps:
the active visual detection module is used for detecting landing cooperative targets and extracting characteristics by the vision of the unmanned aerial vehicle and providing relatively reliable and stable 2D image characteristics for the pose calculation module;
the unmanned aerial vehicle position and pose calculation module is used for calculating the position and pose of the unmanned aerial vehicle relative to the landing cooperative target at the current moment according to the extracted 2D image characteristic information; between image frames, integral calculation is carried out on the pose of the unmanned aerial vehicle by adopting angular velocity and acceleration information provided by an airborne Inertial Measurement Unit (IMU) module; correcting the visual and inertial information by using UKF to obtain a reliable pose result;
and the self-adaptive position and posture fusion module is used for performing self-adaptive fusion on output results of the unmanned aerial vehicle position and posture calculation modules to realize optimal estimation of the position and posture and the motion state of the unmanned aerial vehicle.
According to the scheme, the active visual detection module at least comprises two parts: image target extraction and feature autonomous selection. The image target extraction part adopts a method of feature detection plus pattern matching to identify and extract the cooperative features in the field of view, as shown in fig. 2. These cooperative features not only provide orientation information for the drone's vision, but also carry unique graphical patterns. Feature extraction and matching are performed on the image using the Harris corner feature and Hough transform algorithms, target (cooperative feature) identification is completed according to the combined point-and-line feature relations, and the point correspondence information of the cooperative features is acquired at the same time. This point correspondence information provides the input for the unmanned aerial vehicle pose calculation.
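As an illustration of this front end (not part of the original disclosure), the following Python/OpenCV sketch detects Harris corners and probabilistic Hough line segments and keeps only the corners that lie close to a detected segment; the thresholds and the simple point/line consistency rule are placeholder assumptions, not the patent's actual marker-matching logic.

```python
import cv2
import numpy as np

def detect_cooperative_features(gray):
    """Sketch of the detection front end: Harris corners + Hough line segments,
    followed by a simple point/line consistency check. Thresholds and the
    matching rule are illustrative placeholders."""
    # Corner features: Harris response, keep local responses above a threshold
    harris = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    corners = np.argwhere(harris > 0.01 * harris.max())[:, ::-1]  # rows -> (x, y)

    # Line-segment features via the probabilistic Hough transform on Canny edges
    edges = cv2.Canny(gray, 80, 160)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                            minLineLength=20, maxLineGap=5)
    lines = [] if lines is None else lines[:, 0, :]  # each row: x1, y1, x2, y2

    # Geometric consistency: keep corners lying near a detected segment, as a
    # stand-in for the point/line constraint used to identify each marker.
    matched = []
    for (x, y) in corners:
        for (x1, y1, x2, y2) in lines:
            d = abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1) / \
                (np.hypot(x2 - x1, y2 - y1) + 1e-9)
            if d < 2.0:
                matched.append((float(x), float(y)))
                break
    return np.array(matched), lines
```

In the pipeline described above, the retained point/line combinations would then be matched against the known geometry of each cooperative feature to complete the identification step.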
As shown in FIG. 3, the landing cooperative target is made up of a series of cooperative features of different scales. The number of feature groups of different scales is defined as the number of layers, which is determined by the range of distances between the target plane and the camera of the unmanned aerial vehicle: as this distance grows, the imaged feature scale shrinks and the detection accuracy deteriorates. Therefore, the feature autonomous selection part must both satisfy the requirements of the landing task of the unmanned aerial vehicle and guarantee the detection performance of the target. According to the camera pinhole imaging principle, a cooperative feature of actual length D appears in the camera field of view with length

d = f·D / H    (1)

where f is the camera focal length, D the actual size of the cooperative feature, and H the relative distance between the camera and the cooperative feature. The relationship between the actual size and the relative distance of such cooperative features can therefore be described as

D_i / H_i = d / f    (2)

where D_i denotes the actual size of the cooperative features of the i-th layer. The actual sizes of the cooperative features at the different visual layers vary as D_k = θ_k·D_{k-1}, where θ_k is the scale ratio between the features of the k-th layer and those of the (k-1)-th layer. For the cooperative features of all layers to appear with the same size in the camera field of view, as shown in fig. 4, the distances between the cooperative features and the camera are related through equation (2) in the same way, i.e. H_k = θ_k·H_{k-1}. Suppose the unmanned aerial vehicle descends from altitude H_max to altitude H_min. The number of visual feature layers L contained in the landing target is then calculated from H_max, H_min and the scale ratios θ_k by a formula rendered as an image in the original text. For example, a drone landing from 5 m down to 0.1 m relative to the target requires at least two layers of cooperative features with different scale ratios.
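For illustration (not part of the original disclosure), the sketch below applies equation (1) to pick a usable feature layer and estimates the required number of layers. The pixel-size bounds and the logarithmic form assumed for the layer-count formula are illustrative assumptions, chosen only to be consistent with the 5 m to 0.1 m example above.

```python
import math

def projected_size(f, D, H):
    """Equation (1): projected length d (pixels) of a feature of real size D
    observed from relative distance H with focal length f (pixels)."""
    return f * D / H

def choose_feature_layer(f, layer_sizes, H, d_min=20, d_max=200):
    """Pick the layer whose projected size falls inside a usable pixel range.
    layer_sizes: real sizes D_i of each layer; d_min/d_max are illustrative
    image-quality limits, not values taken from the patent."""
    candidates = [(i, projected_size(f, D, H)) for i, D in enumerate(layer_sizes)]
    usable = [(i, d) for i, d in candidates if d_min <= d <= d_max]
    # Prefer the layer with the largest usable projection (best detection quality).
    return max(usable, key=lambda t: t[1])[0] if usable else None

def min_layers(H_max, H_min, theta):
    """Rough layer count needed to cover [H_min, H_max] when adjacent layers
    differ by a constant scale ratio theta > 1 (assumed form of the layer-count
    formula, not the patent's exact expression)."""
    return math.ceil(math.log(H_max / H_min, theta))

# Example: descending from 5 m to 0.1 m with theta = 10 suggests 2 layers.
print(min_layers(5.0, 0.1, 10.0))  # -> 2
```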
Furthermore, the unmanned aerial vehicle pose calculation module relies mainly on visual and inertial information to calculate the current position and attitude of the unmanned aerial vehicle. In a vision system, a homography represents the projective transformation between corresponding points of two image planes, or of an image plane and the target plane, and also realizes the matching between the feature points of the two planes, as shown in fig. 4. The target coordinate system is established on the target plane, i.e. every point of the target plane has coordinate 0 along the Z axis. The homography between the target plane W and image i is represented by a 3 × 3 matrix H_W^i:

s_1·m̃_i = H_W^i·M̃

where s_1 is a scale scalar, M = [X, Y]^T is a feature point on the target plane with homogeneous coordinates M̃ = [X, Y, 1]^T, and m_i = [μ, v]^T is the image point of M in image i, with homogeneous coordinates m̃_i = [μ, v, 1]^T. Likewise, the homography between image plane i and image plane j is represented by a 3 × 3 matrix H_i^j:

s_2·m̃_j = H_i^j·m̃_i

where s_2 is a scale scalar, m_i and m_j are the matched points of images i and j, and m̃_i and m̃_j are their homogeneous forms. Solving for the homography matrix is treated as a nonlinear least-squares problem, minimizing the sum of squared reprojection errors Σ_k ‖m_k − π(H·M̃_k)‖² over all point correspondences, where π denotes dehomogenization; this minimization is computed with the Levenberg-Marquardt algorithm. The relative pose of the unmanned aerial vehicle is then obtained by decomposing the homography. Because the homography contains both the camera intrinsic and extrinsic parameters, and the intrinsic parameter matrix K of the vision system is known, the optimized H yields the relative displacement t and rotation R of the unmanned aerial vehicle by decomposition. Concretely, for the planar target with Z = 0 the homography satisfies H = λ·K·[r_1  r_2  t], so that

[r_1  r_2  t] = λ^{-1}·K^{-1}·[h_1  h_2  h_3]

where h_i denotes the i-th column of H and r_i the i-th column of R. Since all column vectors of the rotation matrix R are mutually orthogonal, r_3 is determined by the calculation r_1 × r_2. In general, however, the rotation matrix obtained this way is affected by noise in the image detection data and does not satisfy orthogonality exactly; singular value decomposition is therefore used to optimize the rotation matrix R, generating a new, fully orthogonal rotation matrix whose rotation is essentially consistent with the previous one. Finally, −R^{-1}·t and R^{-1} represent the translation and orientation of the vision system carried by the unmanned aerial vehicle relative to the landing target. Because the camera points vertically downward and is rigidly fixed to the unmanned aerial vehicle body, the relative pose of the unmanned aerial vehicle becomes solvable.
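A compact sketch of this pose recovery in Python with OpenCV and NumPy follows; it substitutes cv2.findHomography (RANSAC with built-in refinement) for the Levenberg-Marquardt optimization described above, and uses an SVD step to restore the orthogonality of R. It is a minimal illustration, not the patent's implementation.

```python
import cv2
import numpy as np

def pose_from_homography(obj_pts, img_pts, K):
    """Relative pose from planar correspondences, following the decomposition
    above: columns of K^-1 H give r1, r2, t; r3 = r1 x r2; SVD re-orthogonalization.
    obj_pts: Nx2 float32 array of (X, Y) on the target plane (metres).
    img_pts: Nx2 float32 array of matched pixel coordinates.
    K:       3x3 camera intrinsic matrix."""
    H, _ = cv2.findHomography(obj_pts, img_pts, cv2.RANSAC, 3.0)
    A = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(A[:, 0])          # scale so that ||r1|| = 1
    r1, r2, t = lam * A[:, 0], lam * A[:, 1], lam * A[:, 2]
    R = np.column_stack((r1, r2, np.cross(r1, r2)))
    # Image noise breaks orthogonality; project R back onto a rotation via SVD.
    U, _, Vt = np.linalg.svd(R)
    R = U @ Vt
    # Camera (and hence UAV, since the camera is body-fixed) pose w.r.t. the
    # landing target, matching the -R^-1 t and R^-1 quantities in the text.
    R_cam = R.T
    t_cam = -R.T @ t
    return R_cam, t_cam
```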
According to the scheme, the unmanned aerial vehicle pose calculation module uses the IMU to correct or compensate the visual pose solution in real time under the UKF framework: the IMU serves as the prediction information and the visual result as the update information, and the current pose and motion state of the unmanned aerial vehicle are calculated as shown in FIG. 5. The onboard vision system (lower right) outputs the pose measurement value (the position and attitude measured by vision), and the inertial system (upper right) outputs the acceleration and angular velocity information. The core UKF framework of the pose estimation (left side) mainly comprises two parts, system prediction and measurement update. f(·) denotes the state transition equation evolving continuously over time, Q the system noise covariance, h(·) the state measurement equation based on the visual pose, and R the measurement noise covariance. The information sources of the UKF framework comprise the IMU and the vision part; the IMU drives the state prediction of the UKF, whose state vector is x = [p^T, v^T, q^T, b_ω^T, b_a^T]^T, where p, v and q respectively denote the relative 3D position, velocity and attitude of the unmanned aerial vehicle. The IMU acquires the accelerations and rotation rates of the rigid body along its three axes; b_ω and b_a denote the gyroscope and accelerometer bias terms, while the vision system is responsible for providing the 3D position and attitude of the drone itself relative to the landing target. Assuming that the inertial measurement information contains a bias term b and a random disturbance term n, the angular velocity ω and the acceleration a of the onboard IMU are modeled as
ω = ω_m − b_ω − n_ω,  a = a_m − b_a − n_a    (7)
where the subscript m denotes the measured value. The dynamics of the non-static bias term b are approximated by a random-walk process driven by white noise,

ḃ_ω = n_{bω},  ḃ_a = n_{ba}    (8)
Therefore, the motion state of the whole unmanned aerial vehicle is expressed by the following differential equations:

ṗ = v
v̇ = C_(q)·(a_m − b_a − n_a) − g
q̇ = ½·Ω(ω_m − b_ω − n_ω)·q
ḃ_ω = n_{bω},  ḃ_a = n_{ba}    (9)

where C_(q) is the rotation matrix corresponding to the attitude quaternion q, g is the gravity vector in the world coordinate system, and Ω(ω) is the quaternion multiplication matrix of ω. The body acceleration and angular velocity information is used for the state prediction of the UKF, while the vision-based pose analysis is mainly used for the state update of the UKF.
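For illustration, the following NumPy sketch propagates the state with bias-corrected IMU data according to equations (7)-(9). The Hamilton [w, x, y, z] quaternion convention, the sign convention for g, and the single Euler integration step are assumptions of this sketch, not details fixed by the patent text.

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product of quaternions in [w, x, y, z] order."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_to_rot(q):
    """Rotation matrix C_(q) for a unit quaternion q = [w, x, y, z] (body to world)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def imu_predict(p, v, q, b_a, b_w, a_m, w_m, g, dt):
    """One Euler step of the kinematic model (9): bias-corrected acceleration and
    angular rate (equation (7), noise terms dropped) integrate position, velocity
    and attitude between visual frames."""
    a = a_m - b_a                      # equation (7): a = a_m - b_a - n_a
    w = w_m - b_w                      # equation (7): omega = omega_m - b_omega - n_omega
    C = quat_to_rot(q)
    p = p + v * dt
    v = v + (C @ a - g) * dt           # specific force rotated to world, gravity removed
    dq = 0.5 * quat_mul(q, np.array([0.0, *w]))   # q_dot = 0.5 * q (x) [0, omega]
    q = q + dq * dt
    return p, v, q / np.linalg.norm(q), b_a, b_w
```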
For the position and attitude obtained by the visual method, the corresponding measurement model, equation (10) (rendered as an image in the original text), expresses the visually measured position and attitude in terms of the pose maintained by the IMU and the rotation from the visual coordinate system to the world coordinate system.
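The prediction/update split described above can be illustrated with an off-the-shelf UKF. The sketch below assumes the third-party filterpy library and a deliberately reduced position-velocity state (attitude and bias estimation from equation (9) are omitted), so it only mirrors the structure of the filter, not its full state.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.01                              # 100 Hz IMU rate, as in the text
accel_world = np.zeros(3)              # set each cycle from the bias-corrected IMU

def fx(x, dt):
    """Process model: constant-acceleration integration of [p, v]."""
    p, v = x[:3], x[3:]
    return np.hstack((p + v * dt + 0.5 * accel_world * dt**2,
                      v + accel_world * dt))

def hx(x):
    """Measurement model: vision observes the 3D position directly."""
    return x[:3]

points = MerweScaledSigmaPoints(n=6, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=6, dim_z=3, dt=dt, hx=hx, fx=fx, points=points)
ukf.x = np.zeros(6)
ukf.P = np.eye(6) * 0.1                # initial state covariance
ukf.Q = np.eye(6) * 1e-3               # process noise (IMU integration error)
ukf.R = np.eye(3) * 1e-2               # measurement noise of the visual pose

def step(imu_accel_world, vision_position=None):
    """Predict with the IMU every cycle; update only when a visual fix arrives,
    which is how the filter raises the output rate above the camera frame rate."""
    global accel_world
    accel_world = imu_accel_world
    ukf.predict()
    if vision_position is not None:
        ukf.update(vision_position)
    return ukf.x.copy(), ukf.P.copy()
```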
According to the scheme, the self-adaptive pose fusion module treats each pairing of active visual detection and unmanned aerial vehicle pose calculation as a subunit and adopts federal filtering as the information fusion framework. Fig. 6 shows the overall architecture of the adaptive fusion, in which n sub-filters respectively process the visual pose solutions Z_i. Each sub-filter is an independent unscented Kalman filter (UKF) with its own prediction and update steps, using the corresponding visual solution as its measurement information. The fusion part is responsible for calculating the distribution parameters β_i and the final optimized solution. The onboard IMU serves as the reference information of the federal filtering and provides the real-time acceleration and angular velocity data used for predicting the state of the unmanned aerial vehicle. In detail, the estimated states of all sub-filters and the global update X_i are consistent, as shown in equation (11):

X_1 = X_2 = … = X_n = X_g    (11)

In any sub-filter, the motion state X_i of the unmanned aerial vehicle is predicted from the IMU and then updated with the visual measurement. As described above, vision yields the 3D position (x, y, z) and the attitude angles (θ, φ, ψ) of the unmanned aerial vehicle, corresponding respectively to the pitch, roll and yaw angles; in the fusion process these parameters constitute the measurement information Z_i (i = 1, 2, 3, …), as shown in equation (12):

Z_i = [x, y, z, θ, φ, ψ]^T    (12)
The estimated state X̂_i obtained from each sub-filter and its corresponding covariance P_i are passed to the global fusion module. P_i accounts for the performance of the corresponding filter, which means that the detection accuracy of the visual module based on cooperative feature i is reflected through P_i. Thus, by weighting all the available states X̂_i with their corresponding covariances P_i and summing (the covariance acts as the weight applied to each state before summation, see equations (13)-(15)), the global state estimate X̂_g, the corresponding system noise Q_g and the state covariance P_g are obtained as

X̂_g = P_g · Σ_{i=1..n} P_i^{-1} · X̂_i    (13)

Q_g = (Σ_{i=1..n} Q_i^{-1})^{-1}    (14)

P_g = (Σ_{i=1..n} P_i^{-1})^{-1}    (15)
The accuracy of each cooperative-feature-based vision estimation module differs across landing stages and distance ranges. The federal filter framework therefore introduces distribution parameters β_i, which define the system noise Q_i and covariance P_i assigned to each sub-filter at the next time instant and dynamically adjust the confidence placed in each sub-filter, thereby realizing the self-adaptive fusion. As expressed by equation (16), β_i is inversely proportional to the covariance P_i of the corresponding sub-filter. Through this self-adaptive fusion mode, when a particular vision estimation module exhibits a relatively obvious detection or positioning error, β_i mitigates the effect of that vision module on the overall global estimate.
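The global fusion and information-sharing steps can be sketched in NumPy as follows. Equations (13)-(15) above are reconstructions of images in the original, and the explicit form of β_i in equation (16) is not visible in this text, so the trace-normalized factor used here is an assumed, standard federated-filtering choice rather than the patent's exact expression.

```python
import numpy as np

def federated_fuse(states, covariances):
    """Information-weighted fusion of the sub-filter outputs (equations (13)-(15)
    as reconstructed above): the global covariance is the inverse of the summed
    information, and each state is weighted by its information matrix."""
    infos = [np.linalg.inv(P) for P in covariances]
    P_g = np.linalg.inv(sum(infos))
    x_g = P_g @ sum(I @ x for I, x in zip(infos, states))
    return x_g, P_g

def sharing_factors(covariances):
    """Adaptive information-sharing factors beta_i (assumed form of equation (16)):
    inversely proportional to the trace of each sub-filter covariance and
    normalized to sum to one, so poorly performing vision modules are down-weighted."""
    inv_tr = np.array([1.0 / np.trace(P) for P in covariances])
    return inv_tr / inv_tr.sum()

# Example: two cooperative-feature modules, the second one noisier.
x1, P1 = np.array([1.0, 2.0, 3.0]), np.eye(3) * 0.01
x2, P2 = np.array([1.2, 1.9, 3.1]), np.eye(3) * 0.10
x_g, P_g = federated_fuse([x1, x2], [P1, P2])
beta = sharing_factors([P1, P2])       # used to reset Q_i and P_i for the next step
```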
According to the scheme, the self-adaptive pose fusion module further comprises an anomaly monitoring part, which is in charge of monitoring and eliminating erroneous results output by the vision modules before fusion, ensuring the continuity and reliability of the input information during the self-adaptive fusion process. The rotational relationship between the visual coordinate system and the body (inertial) coordinate system is theoretically constant during the fusion process. This rotation is calculated at every time instant k from the current visual and inertial attitude estimates, and it changes only slowly in comparison with the update frequency of the fusion method. The resulting sequence of rotation values can therefore be smoothed by a median filtering operation, eliminating abnormal values or jumps in the sequence. Specifically, the estimate of the rotation between the body coordinate system and the visual coordinate system is filtered with a median filter of window size N; when the rotation calculated at time k exceeds the 3-times-variance (3σ) boundary of the median-filtered sequence at that moment, the corresponding vision module is considered to have output a pose solution with significant error.
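A minimal sketch of such a monitor is given below; the window size, the Euler-angle parameterization of the vision-to-body rotation, and the exact acceptance rule are illustrative assumptions of this sketch.

```python
import numpy as np
from collections import deque

class RotationAnomalyMonitor:
    """Sliding median-filter check on the (nominally constant) vision-to-body
    rotation described above, rejecting pose solutions whose rotation estimate
    jumps outside a 3-sigma band around the filtered history."""
    def __init__(self, window=15):
        self.buf = deque(maxlen=window)

    def is_outlier(self, rot_angles):
        rot_angles = np.asarray(rot_angles, dtype=float)
        if len(self.buf) < self.buf.maxlen:
            self.buf.append(rot_angles)
            return False                      # not enough history yet
        hist = np.array(self.buf)
        med = np.median(hist, axis=0)         # median-filtered reference value
        sigma = hist.std(axis=0)
        outlier = np.any(np.abs(rot_angles - med) > 3.0 * sigma + 1e-9)
        if not outlier:
            self.buf.append(rot_angles)       # only accepted samples extend the history
        return outlier
```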
The computer storage medium of the embodiment of the invention stores a computer program executable by a processor, and the computer program executes the unmanned aerial vehicle pose self-adaptive estimation method in the embodiment.
In conclusion, the unmanned aerial vehicle pose self-adaptive estimation method based on active vision establishes a target model with multi-scale cooperative characteristics according to the change of the relative height of the unmanned aerial vehicle in the landing process and provides a corresponding detection algorithm; by taking the detected cooperative features as measurement information, an unmanned aerial vehicle pose estimation method based on vision and inertia is provided; and establishing a federal filtering framework, performing adaptive estimation on the pose of the unmanned aerial vehicle, and dynamically fusing pose estimation modules based on different scale cooperation characteristics by adjusting distribution parameters. The invention can ensure the stable precision of the visual positioning of the unmanned aerial vehicle in the landing process from far to near, can also effectively improve the effective measurement precision and range of the vision of the unmanned aerial vehicle facing the autonomous landing task, and is also suitable for the visual perception and positioning research of the robot.
It will be understood that modifications and variations can be made by persons skilled in the art in light of the above teachings and all such modifications and variations are intended to be included within the scope of the invention as defined in the appended claims.

Claims (10)

1. An unmanned aerial vehicle pose self-adaptive estimation method based on active vision is characterized by at least comprising the following steps:
active visual detection, namely continuously observing the landing cooperative targets through an unmanned aerial vehicle airborne visual system, screening all the detected landing cooperative targets, and reserving the information of the landing cooperative targets with higher detection precision in the current visual field range; the landing cooperative target comprises a plurality of groups of cooperative features with different scales, each group of cooperative features comprises different specific geometric figures, and each group of cooperative features is suitable for the observation of the vision of the unmanned aerial vehicle in different relative distance ranges;
calculating the pose of the unmanned aerial vehicle, namely calculating the pose of the current unmanned aerial vehicle relative to a cooperative target in real time by taking visual 2D characteristics and inertial measurement information as input;
and self-adaptive pose fusion, namely performing self-adaptive fusion based on federal filtering on all calculated pose solutions of the unmanned aerial vehicle relative to the landing cooperative target according to corresponding covariance information to obtain the optimized pose of the unmanned aerial vehicle.
2. The unmanned aerial vehicle pose self-adaptive estimation method according to claim 1, wherein the active visual detection at least comprises two steps of image target extraction and feature autonomous selection, an unmanned aerial vehicle airborne visual system continuously observes landing cooperative targets, detection and feature extraction of all the landing cooperative targets are realized by adopting an image target extraction method, then all the detected targets are screened by utilizing a feature autonomous selection method, and target information with higher detection precision in the current visual field range is reserved.
3. The unmanned aerial vehicle pose adaptive estimation method of claim 1, wherein the unmanned aerial vehicle pose calculation method comprises pose measurement based on vision and inertia and pose calculation based on unscented kalman filtering; the pose measurement based on vision and inertia mainly utilizes inertia or vision detection information to calculate the relative pose of the unmanned aerial vehicle; and the pose calculation based on the unscented Kalman filtering is responsible for calculating the pose under the condition of simultaneously having inertia and visual detection information.
4. The unmanned aerial vehicle pose adaptive estimation method according to claim 1, wherein when only inertial measurement information is input, the pose change of the unmanned aerial vehicle relative to the previous moment is determined by using the time integral of angular velocity and acceleration information of the unmanned aerial vehicle; when the visual 2D features are obtained, the unmanned aerial vehicle calculates the pose of the unmanned aerial vehicle relative to the landing cooperative target according to homography transformation of the visual features.
5. The unmanned aerial vehicle pose adaptive estimation method according to claim 1, further comprising the steps of:
and anomaly monitoring, namely continuously detecting and identifying the optimized pose solution and eliminating an abnormal value.
6. The unmanned aerial vehicle pose self-adaptive estimation method according to claim 2, characterized in that the image target extraction method comprises three steps of line segment feature detection, corner feature detection and geometric pattern matching, and the airborne vision system processes the real-time acquired image by using the three steps, completes pattern matching according to geometric constraint relation between points and lines, and simultaneously realizes landing cooperative target detection and feature extraction.
7. The unmanned aerial vehicle pose self-adaptive estimation method according to claim 2, characterized in that the feature self-selection method simulates the biological mechanism of human vision for selecting positioning reference objects from surrounding scenes, and selects the optimal visual target for input information of visual positioning from the combination of landing cooperative targets with different sizes and dimensions according to the 3D-2D projection relation between the imaging proportion and the relative distance of the target in the vision.
8. The unmanned aerial vehicle pose self-adaptive estimation method according to claim 3, wherein the pose calculation method based on unscented Kalman filtering takes inertia and visual detection information as input, the inertial sensor performs cumulative calculation on the unmanned aerial vehicle pose at an angular velocity and an acceleration of a 100 or 200Hz refresh rate, and the visual detection information calculates the unmanned aerial vehicle pose through the conversion of visual homography; the two pose solutions are mutually corrected under a Bayes frame, and covariance information corresponding to the correction value is obtained; the non-linear state transition of the unmanned aerial vehicle along with the time change is realized by the unscented transform.
9. The unmanned aerial vehicle pose self-adaptive estimation method according to claim 3, wherein the self-adaptive pose fusion method is used for conducting modular processing on visual pose calculation, federate fusion is conducted by using pose solutions and corresponding covariances output by the modules, abnormal states output by the modules are monitored, and continuous and stable pose estimation is provided for the unmanned aerial vehicle in an autonomous landing stage.
10. A computer storage medium, characterized in that a computer program executable by a processor is stored therein, the computer program executing the unmanned aerial vehicle pose adaptive estimation method according to any one of claims 1-9.
CN201911133525.1A 2019-11-19 2019-11-19 Unmanned aerial vehicle pose self-adaptive estimation method based on active vision Active CN110865650B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911133525.1A CN110865650B (en) 2019-11-19 2019-11-19 Unmanned aerial vehicle pose self-adaptive estimation method based on active vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911133525.1A CN110865650B (en) 2019-11-19 2019-11-19 Unmanned aerial vehicle pose self-adaptive estimation method based on active vision

Publications (2)

Publication Number Publication Date
CN110865650A CN110865650A (en) 2020-03-06
CN110865650B true CN110865650B (en) 2022-12-20

Family

ID=69654937

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911133525.1A Active CN110865650B (en) 2019-11-19 2019-11-19 Unmanned aerial vehicle pose self-adaptive estimation method based on active vision

Country Status (1)

Country Link
CN (1) CN110865650B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112504261B (en) * 2020-11-09 2024-02-09 中国人民解放军国防科技大学 Unmanned aerial vehicle falling pose filtering estimation method and system based on visual anchor points
CN113066050B (en) * 2021-03-10 2022-10-21 天津理工大学 Method for resolving course attitude of airdrop cargo bed based on vision
CN113608556B (en) * 2021-07-19 2023-06-30 西北工业大学 Multi-robot relative positioning method based on multi-sensor fusion
CN113838215A (en) * 2021-07-30 2021-12-24 歌尔光学科技有限公司 VR collision detection method and system
CN113865579A (en) * 2021-08-06 2021-12-31 湖南大学 Unmanned aerial vehicle pose parameter measuring system and method
CN114543797A (en) * 2022-02-18 2022-05-27 北京市商汤科技开发有限公司 Pose prediction method and apparatus, device, and medium
CN114415736B (en) * 2022-04-01 2022-07-12 之江实验室 Multi-stage visual accurate landing method and device for unmanned aerial vehicle
CN116399327A (en) * 2023-04-10 2023-07-07 烟台欣飞智能系统有限公司 Unmanned aerial vehicle positioning system based on multisource data fusion

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101216319A (en) * 2008-01-11 2008-07-09 南京航空航天大学 Low orbit satellite multi-sensor fault tolerance autonomous navigation method based on federal UKF algorithm
CN102991681A (en) * 2012-12-25 2013-03-27 天津工业大学 Ground target identification method in unmanned aerial vehicle vision landing system
CN103942813A (en) * 2014-03-21 2014-07-23 杭州电子科技大学 Single-moving-object real-time detection method in electric wheelchair movement process
CN104166854A (en) * 2014-08-03 2014-11-26 浙江大学 Vision grading landmark locating and identifying method for autonomous landing of small unmanned aerial vehicle
CN104460685A (en) * 2014-11-21 2015-03-25 南京信息工程大学 Control system for four-rotor aircraft and control method of control system
CN105197252A (en) * 2015-09-17 2015-12-30 武汉理工大学 Small-size unmanned aerial vehicle landing method and system
CN106203439A (en) * 2016-06-27 2016-12-07 南京邮电大学 The homing vector landing concept of unmanned plane based on mark multiple features fusion
CN106687878A (en) * 2014-10-31 2017-05-17 深圳市大疆创新科技有限公司 Systems and methods for surveillance with visual marker
CN106873628A (en) * 2017-04-12 2017-06-20 北京理工大学 A kind of multiple no-manned plane tracks the collaboration paths planning method of many maneuvering targets
CN107289948A (en) * 2017-07-24 2017-10-24 成都通甲优博科技有限责任公司 A kind of UAV Navigation System and method based on Multi-sensor Fusion
US9924320B1 (en) * 2016-12-30 2018-03-20 Uber Technologies, Inc. Locating a user device
CN109209229A (en) * 2018-10-15 2019-01-15 中国石油集团渤海钻探工程有限公司 A kind of track in horizontal well landing mission regulates and controls method
CN109376785A (en) * 2018-10-31 2019-02-22 东南大学 Air navigation aid based on iterative extended Kalman filter fusion inertia and monocular vision
CN109737959A (en) * 2019-03-20 2019-05-10 哈尔滨工程大学 A kind of polar region Multi-source Information Fusion air navigation aid based on federated filter
US10304208B1 (en) * 2018-02-12 2019-05-28 Avodah Labs, Inc. Automated gesture identification using neural networks
CN110018691A (en) * 2019-04-19 2019-07-16 天津大学 Small-sized multi-rotor unmanned aerial vehicle state of flight estimating system and method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10139279B2 (en) * 2015-05-12 2018-11-27 BioSensing Systems, LLC Apparatuses and methods for bio-sensing using unmanned aerial vehicles
CN107209854A (en) * 2015-09-15 2017-09-26 深圳市大疆创新科技有限公司 For the support system and method that smoothly target is followed
CN107239077B (en) * 2017-06-28 2020-05-08 歌尔科技有限公司 Unmanned aerial vehicle moving distance calculation system and method
US11242144B2 (en) * 2018-02-09 2022-02-08 Skydio, Inc. Aerial vehicle smart landing
CN109407708A (en) * 2018-12-11 2019-03-01 湖南华诺星空电子技术有限公司 A kind of accurate landing control system and Landing Control method based on multi-information fusion

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101216319A (en) * 2008-01-11 2008-07-09 南京航空航天大学 Low orbit satellite multi-sensor fault tolerance autonomous navigation method based on federal UKF algorithm
CN102991681A (en) * 2012-12-25 2013-03-27 天津工业大学 Ground target identification method in unmanned aerial vehicle vision landing system
CN103942813A (en) * 2014-03-21 2014-07-23 杭州电子科技大学 Single-moving-object real-time detection method in electric wheelchair movement process
CN104166854A (en) * 2014-08-03 2014-11-26 浙江大学 Vision grading landmark locating and identifying method for autonomous landing of small unmanned aerial vehicle
CN106687878A (en) * 2014-10-31 2017-05-17 深圳市大疆创新科技有限公司 Systems and methods for surveillance with visual marker
CN104460685A (en) * 2014-11-21 2015-03-25 南京信息工程大学 Control system for four-rotor aircraft and control method of control system
CN105197252A (en) * 2015-09-17 2015-12-30 武汉理工大学 Small-size unmanned aerial vehicle landing method and system
CN106203439A (en) * 2016-06-27 2016-12-07 南京邮电大学 The homing vector landing concept of unmanned plane based on mark multiple features fusion
US9924320B1 (en) * 2016-12-30 2018-03-20 Uber Technologies, Inc. Locating a user device
CN106873628A (en) * 2017-04-12 2017-06-20 北京理工大学 A kind of multiple no-manned plane tracks the collaboration paths planning method of many maneuvering targets
CN107289948A (en) * 2017-07-24 2017-10-24 成都通甲优博科技有限责任公司 A kind of UAV Navigation System and method based on Multi-sensor Fusion
US10304208B1 (en) * 2018-02-12 2019-05-28 Avodah Labs, Inc. Automated gesture identification using neural networks
CN109209229A (en) * 2018-10-15 2019-01-15 中国石油集团渤海钻探工程有限公司 A kind of track in horizontal well landing mission regulates and controls method
CN109376785A (en) * 2018-10-31 2019-02-22 东南大学 Air navigation aid based on iterative extended Kalman filter fusion inertia and monocular vision
CN109737959A (en) * 2019-03-20 2019-05-10 哈尔滨工程大学 A kind of polar region Multi-source Information Fusion air navigation aid based on federated filter
CN110018691A (en) * 2019-04-19 2019-07-16 天津大学 Small-sized multi-rotor unmanned aerial vehicle state of flight estimating system and method

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
A Hierarchical Vision-Based UAV Localization for an; Haiwen Yuan et al.; Electronics; 2018-05-11; full text *
A Vision/Radar/INS Integrated Guidance Method for Shipboard Landing; Yue Meng et al.; IEEE Transactions on Industrial Electronics; 2019-01-13; Vol. 66, No. 1; full text *
Vision-based autonomous quadrotor landing on a moving platform; Davide Falanga et al.; 2017 IEEE International Symposium on Safety, Security and Rescue Robotics; 2017-10-30; full text *
Integrated navigation fusion method for instrument landing system/GBAS landing system/inertial navigation system based on unscented Kalman filtering; 于耕 et al.; Science Technology and Engineering; 2017-12-31; Vol. 17, No. 36; full text *
Design and research of visual positioning technology for chip wire bonding; 田锋; China Masters' Theses Full-text Database, Information Science and Technology; 2017-07-15 (No. 07); full text *
Design of a vision-based pose estimator for autonomous landing of an unmanned aerial vehicle; 吴良晶; China Masters' Theses Full-text Database, Engineering Science and Technology II; 2018-03-15 (No. 3); full text *

Also Published As

Publication number Publication date
CN110865650A (en) 2020-03-06

Similar Documents

Publication Publication Date Title
CN110865650B (en) Unmanned aerial vehicle pose self-adaptive estimation method based on active vision
US20210012520A1 (en) Distance measuring method and device
CN113269098B (en) Multi-target tracking positioning and motion state estimation method based on unmanned aerial vehicle
KR101776622B1 (en) Apparatus for recognizing location mobile robot using edge based refinement and method thereof
US10991105B2 (en) Image processing device
CN115371665B (en) Mobile robot positioning method based on depth camera and inertial fusion
CN110736457A (en) combination navigation method based on Beidou, GPS and SINS
JP6858681B2 (en) Distance estimation device and method
CN114623817A (en) Self-calibration-containing visual inertial odometer method based on key frame sliding window filtering
CN110598370B (en) Robust attitude estimation of multi-rotor unmanned aerial vehicle based on SIP and EKF fusion
JP2017524932A (en) Video-assisted landing guidance system and method
CN114910069A (en) Fusion positioning initialization system and method for unmanned aerial vehicle
CN112862818B (en) Underground parking lot vehicle positioning method combining inertial sensor and multi-fisheye camera
CN112146627A (en) Aircraft imaging system using projected patterns on featureless surfaces
CN116952229A (en) Unmanned aerial vehicle positioning method, device, system and storage medium
Aminzadeh et al. Implementation and performance evaluation of optical flow navigation system under specific conditions for a flying robot
CN112902957B (en) Missile-borne platform navigation method and system
Zheng et al. Integrated navigation system with monocular vision and LIDAR for indoor UAVs
Popov et al. UAV navigation on the basis of video sequences registered by onboard camera
CN208314856U (en) A kind of system for the detection of monocular airborne target
CN111712855A (en) Ground information processing method and device and unmanned vehicle
KR102408478B1 (en) Finding Method of route and device using the same
Li et al. A homography-based visual inertial fusion method for robust sensing of a Micro Aerial Vehicle
Hurwitz et al. Relative Constraints and Their Contribution to Image Configurations
CN114322943B (en) Target distance measuring method and device based on forward-looking image of unmanned aerial vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant