CN110865650A - Unmanned aerial vehicle pose self-adaptive estimation method based on active vision - Google Patents
- Publication number: CN110865650A
- Application number: CN201911133525.1A
- Authority: CN (China)
- Prior art keywords: pose; unmanned aerial vehicle; visual; landing
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/08—Control of attitude, i.e. control of roll, pitch, or yaw
- G05D1/0808—Control of attitude, i.e. control of roll, pitch, or yaw specially adapted for aircraft
- G05D1/10—Simultaneous control of position or course in three dimensions
- G05D1/101—Simultaneous control of position or course in three dimensions specially adapted for aircraft
Abstract
The invention discloses an unmanned aerial vehicle pose self-adaptive estimation method based on active vision, which comprises at least the following steps: active visual detection, in which the airborne vision system of the unmanned aerial vehicle continuously observes the landing cooperative targets, screens all detected landing cooperative targets, and retains the information of the landing cooperative targets with the highest detection precision in the current field of view; unmanned aerial vehicle pose calculation, in which the pose of the unmanned aerial vehicle relative to the cooperative target is calculated in real time, taking the visual 2D features and the inertial measurement information as input; and self-adaptive pose fusion, in which all calculated pose solutions of the unmanned aerial vehicle relative to the landing cooperative target undergo self-adaptive fusion based on federated filtering according to the corresponding covariance information, yielding the optimized pose of the unmanned aerial vehicle. The invention effectively improves the measurement precision and effective range of unmanned aerial vehicle vision for the autonomous landing task, and is also applicable to visual perception and positioning research for robots.
Description
Technical Field
The invention belongs to the field of autonomous landing and visual navigation of unmanned aerial vehicles, and particularly relates to an unmanned aerial vehicle pose self-adaptive estimation method based on active vision.
Background
In recent years, unmanned aerial vehicles have become a research hotspot in the field of robotics, and their autonomous landing capability (on both stationary and moving platforms) has been intensively studied. For this problem, sensing modes based on GPS, inertia, vision, laser and the like are usually adopted, the aim being to calculate the motion pose of the unmanned aerial vehicle relative to the landing target in real time. The patent with publication number 109211241A discloses an unmanned aerial vehicle autonomous positioning method based on visual SLAM, composed of a motion-solving part based on feature extraction and matching, an image and inertial measurement unit (IMU) fusion part, and a 3D point depth estimation part. The patent with publication number 106054929B discloses an unmanned aerial vehicle automatic landing guidance method based on optical flow, which determines a marker by processing real-time images taken by the camera of an optical flow module during landing, and estimates the position and attitude of the marker relative to the unmanned aerial vehicle. Publication number 110068321A discloses a UAV relative pose estimation method for fixed-point landing landmarks. The patent with publication number 104166854B discloses a visual grading landmark positioning and identifying method for unmanned aerial vehicle autonomous landing, which adopts a visually graded landmark to avoid the landmark scale-change problem that arises, under a fixed image resolution, when the ground clearance changes and only a single-grade landmark is used. Similar disclosures include 106516145B, 105197252A, 109270953A, etc. However, the above patents only consider how to calculate the relative pose of the drone from visual 2D features, and do not consider the problem of vision-based pose information fusion.
Disclosure of Invention
The invention aims to provide an unmanned aerial vehicle pose self-adaptive estimation method based on active vision, mainly addressing the influence of changes in the relative distance between the vision system and the observed target on visual positioning precision during the landing stage of the unmanned aerial vehicle: it both evaluates the image-level detection precision at the front end of visual positioning and adaptively fuses the pose results output at the back end of visual positioning.
The technical scheme adopted by the invention for achieving the purpose is as follows:
the invention provides an unmanned aerial vehicle pose self-adaptive estimation method based on active vision, which at least comprises the following steps:
active visual detection, namely continuously observing the landing cooperative targets through an airborne visual system of the unmanned aerial vehicle, screening all the detected landing cooperative targets, and reserving information of the landing cooperative targets with higher detection precision in the current visual field range; the landing cooperative target comprises a plurality of groups of cooperative features with different scales, and each group of cooperative features comprises different specific geometric figures;
calculating the pose of the unmanned aerial vehicle, namely calculating the pose of the current unmanned aerial vehicle relative to a cooperative target in real time by taking the visual 2D characteristics and the inertial measurement information as input;
and self-adaptive pose fusion, namely performing self-adaptive fusion based on federated filtering on all calculated pose solutions of the unmanned aerial vehicle relative to the landing cooperative target according to corresponding covariance information to obtain the optimized pose of the unmanned aerial vehicle.
According to the technical scheme, the active visual detection at least comprises two steps of image target extraction and feature autonomous selection, the unmanned aerial vehicle airborne visual system continuously observes the landing cooperative targets, the image target extraction method is adopted to realize all the landing cooperative target detection and feature extraction, then the feature autonomous selection method is utilized to screen all the detected targets, and target information with high detection precision in the current visual field range is reserved.
According to the technical scheme, the unmanned aerial vehicle pose calculation method comprises pose measurement based on vision and inertia and pose calculation based on unscented Kalman filtering; the pose measurement method based on vision and inertia mainly utilizes inertial or visual detection information to calculate the relative pose of the unmanned aerial vehicle; the pose calculation method based on the unscented Kalman filtering handles the pose calculation when inertial and visual detection information are available simultaneously, and raises the output frequency of the pose calculation.
According to the technical scheme, when only inertial measurement information exists, the pose change of the unmanned aerial vehicle relative to the previous moment is determined by using the time integral of the angular velocity and acceleration information of the inertial measurement information; when the visual 2D features are obtained, the unmanned aerial vehicle calculates the pose of the unmanned aerial vehicle relative to the landing cooperative target according to homography transformation of the visual features.
The technical scheme also comprises the following steps:
and anomaly monitoring, namely continuously detecting and identifying the optimized pose solution and eliminating an abnormal value.
According to the technical scheme, the image target extraction method comprises three steps of line segment feature detection, corner feature detection and geometric pattern matching, the airborne vision system processes the image obtained in real time by utilizing the steps, the pattern matching is completed according to the geometric constraint relation between points and lines, and meanwhile landing cooperation target detection and feature extraction are achieved.
According to the technical scheme, the feature self-selection method simulates the biological mechanism of human vision for selecting the positioning reference object from the surrounding scene, and selects the optimal visual target from the combination of landing cooperative targets with different sizes and dimensions for the input information of visual positioning according to the 3D-2D projection relation between the imaging proportion and the relative distance of the target in the vision.
According to the technical scheme, the pose calculation method based on the unscented Kalman filtering takes inertial and visual detection information as input: the inertial sensor accumulates the pose of the unmanned aerial vehicle from angular velocity and acceleration at a refresh rate of 100 or 200 Hz, while the visual detection information yields the pose of the unmanned aerial vehicle through the visual homography transformation; the two pose solutions correct each other under a Bayesian framework, and the covariance information corresponding to the corrected value is obtained; the nonlinear state transition of the unmanned aerial vehicle over time is realized by the unscented transform.
According to the technical scheme, the self-adaptive pose fusion method modularizes the visual pose calculation, performs federated fusion on the pose solutions output by the modules together with their corresponding covariances, monitors abnormal states in the module outputs, and provides continuous and stable pose estimation for the unmanned aerial vehicle in the autonomous landing stage.
The invention also provides a computer storage medium, in which a computer program executable by a processor is stored, and the computer program executes the unmanned aerial vehicle pose self-adaptive estimation method of the technical scheme.
The invention has the following beneficial effects: by utilizing the onboard vision and inertial information of a rotorcraft and combining information fusion with visual positioning techniques, the acquired visual signal is evaluated in advance by the active vision method, and the visual positioning results based on features of different scales are fused adaptively; this ensures that the unmanned aerial vehicle has stable and continuous pose estimation throughout the far-to-near landing process, and solves the problem of unstable visual detection precision at different relative distances.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a flow chart of an unmanned aerial vehicle pose adaptive estimation method based on active vision;
FIG. 2 is a flow chart of an image target detection algorithm in the active visual detection method;
FIG. 3 is a flow chart of the feature autonomous selection algorithm in the active visual detection method;
FIG. 4 is a pose calculation flow chart based on vision 2D-3D in the pose calculation method of the unmanned aerial vehicle;
FIG. 5 is a pose calculation flow chart based on vision & inertia in the pose calculation method of the unmanned aerial vehicle;
FIG. 6 is a computational framework of an adaptive pose fusion method.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the invention provides an unmanned aerial vehicle pose self-adaptive estimation method based on active vision, which at least comprises the following steps:
and active visual detection, namely providing at least one group of high-quality images corresponding to the landing cooperative target (reference object) for the unmanned aerial vehicle vision system according to the visual 3D-2D projection ratio transformation, so that the unmanned aerial vehicle can extract reliable geometric information from the landing cooperative target by using an image processing algorithm. Specifically, the active visual detection part enables the unmanned aerial vehicle to search for and detect the landing cooperative target through vision; the landing cooperative target comprises a plurality of groups of cooperative features with different scales, and each group of cooperative features is suited to observation by the unmanned aerial vehicle vision within a different relative distance range; when the unmanned aerial vehicle is at different heights, the active visual detection part is responsible for selecting the cooperative features with the best observation quality from the landing cooperative target; each group of cooperative features is composed of different specific geometric figures, and the active visual detection part is responsible for identifying the cooperative features and extracting points and lines from them as image features.
The unmanned aerial vehicle pose calculation is used for calculating the pose (position and attitude) of the current unmanned aerial vehicle relative to the landing cooperative target in real time by taking the visual 2D characteristics and the inertial measurement information as input; when only inertial measurement information exists, determining the pose change of the unmanned aerial vehicle relative to the previous moment by using the time integral of the angular velocity and acceleration information; when the visual 2D features are obtained, the unmanned aerial vehicle calculates the pose of the unmanned aerial vehicle relative to the landing cooperative target according to homography transformation of the visual features. Specifically, the unmanned aerial vehicle pose calculation part is mainly used for calculating pose changes of the current unmanned aerial vehicle relative to a landing target; the vision sensor (camera) is fixedly arranged at the gravity center position of the unmanned aerial vehicle body and faces downwards vertically, so that the posture and the motion of a vision system and the unmanned aerial vehicle are consistent; taking the detected different cooperative features as input, and calculating the pose of the unmanned aerial vehicle (vision system) relative to the landing target at the current moment according to the change of the features on the visual projection plane; in the time between the acquisition of the visual image frames, the unmanned aerial vehicle carries out integral calculation by utilizing the angular velocity and acceleration information of the airborne IMU to obtain the relative pose of the unmanned aerial vehicle at the current moment; and when the IMU and the visual information are obtained at the same time, calculating the pose of the unmanned aerial vehicle according to a UKF-based method.
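For the inter-frame dead-reckoning just described, the following is a minimal sketch that integrates gyroscope and accelerometer readings over one time step with simple Euler integration. It assumes Hamilton quaternions in [w, x, y, z] order, a bias-free IMU and a z-up world frame; all names and conventions are illustrative rather than taken from the patent.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_to_rot(q):
    """Rotation matrix (body -> world) of a unit quaternion [w, x, y, z]."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def imu_propagate(p, v, q, gyro, accel, dt, g=np.array([0.0, 0.0, -9.81])):
    """One dead-reckoning step: position p, velocity v, attitude q (body -> world)."""
    # attitude update: rotate by the small angle |gyro| * dt about the gyro axis
    angle = np.linalg.norm(gyro) * dt
    if angle > 1e-12:
        axis = gyro / np.linalg.norm(gyro)
        dq = np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])
    else:
        dq = np.array([1.0, 0.0, 0.0, 0.0])
    q_new = quat_mul(q, dq)
    q_new /= np.linalg.norm(q_new)
    # velocity/position update: rotate the measured specific force into the
    # world frame, add gravity, then integrate twice over dt
    a_world = quat_to_rot(q) @ accel + g
    v_new = v + a_world * dt
    p_new = p + v * dt + 0.5 * a_world * dt**2
    return p_new, v_new, q_new
```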
And self-adaptive pose fusion is used for carrying out self-adaptive fusion on different visual pose solutions and realizing the optimal estimation of the pose of the unmanned aerial vehicle. Specifically, the self-adaptive pose fusion part takes pose solutions obtained by calculation based on visual features of different scales and corresponding covariance information (active visual detection + unmanned aerial vehicle pose calculation) as input; under a federal filtering framework, covariance information is used as weight factors, weight summation is carried out on different input pose solutions, and a calculated value is considered to be the optimal estimation of the current pose of the unmanned aerial vehicle.
The active visual detection method at least comprises two steps of image target extraction and feature autonomous selection, an unmanned aerial vehicle airborne visual system continuously observes landing cooperative targets, detection and feature extraction of all the landing cooperative targets are achieved through the image target extraction method, then all the detected targets are screened through the feature autonomous selection method, and target information with high detection precision in the current visual field range is reserved.
The unmanned aerial vehicle pose calculation method comprises pose measurement based on vision and inertia and pose calculation based on unscented Kalman filtering (UKF); the pose measurement method based on vision and inertia mainly utilizes inertial or visual detection information to calculate the relative pose of the unmanned aerial vehicle; the UKF-based pose calculation method handles the pose calculation when inertial and visual detection information are available simultaneously, and raises the output frequency of the pose calculation.
The self-adaptive pose fusion method refers to unmanned aerial vehicle pose self-adaptive fusion based on federated filtering, together with anomaly monitoring; the federated-filtering-based self-adaptive fusion performs adaptive fusion on all pose solutions obtained from different visual features according to the corresponding covariance information, so as to optimize the final pose calculation of the unmanned aerial vehicle; meanwhile, the anomaly monitoring method continuously detects and identifies the optimized pose solution and eliminates abnormal values.
The image target extraction method comprises three steps of line segment feature detection, corner feature detection and geometric pattern matching, the airborne vision system processes the image obtained in real time by utilizing the steps, the pattern matching is completed according to the geometric constraint relation between points and lines, and the landing cooperation target detection and the feature extraction can be simultaneously realized.
The feature automatic selection method simulates the biological mechanism of human vision for selecting and positioning reference objects from surrounding scenes, namely, the optimal target of the visual scene is selected from cooperative feature combinations with different sizes and dimensions for input information of visual positioning according to the 3D-2D projection relation between the imaging proportion and the relative distance of the target in the vision.
The pose calculation method based on the UKF takes inertial and visual detection information as input: the inertial sensor accumulates the pose of the unmanned aerial vehicle from angular velocity and acceleration at a refresh rate of 100 or 200 Hz, while the visual detection information yields the pose of the unmanned aerial vehicle through the visual homography transformation; the two pose solutions correct each other under a Bayesian framework, and the covariance information corresponding to the corrected value is obtained; the nonlinear state transition of the unmanned aerial vehicle over time is realized by the unscented transform.
The self-adaptive pose fusion method modularizes the visual pose calculation, performs federated fusion on the pose solutions output by the modules together with their corresponding covariances, monitors abnormal states in the module outputs, and provides continuous and stable pose estimation for the unmanned aerial vehicle in the autonomous landing stage.
The invention also provides an unmanned aerial vehicle pose self-adaptive estimation system based on active vision, mainly used for realizing the above unmanned aerial vehicle pose self-adaptive estimation method, and the system at least comprises:
the active visual detection module is used for detecting landing cooperative targets and extracting characteristics by the vision of the unmanned aerial vehicle and providing relatively reliable and stable 2D image characteristics for the pose calculation module;
the unmanned aerial vehicle pose calculation module is used for calculating the pose of the unmanned aerial vehicle relative to the landing cooperative target at the current moment according to the extracted 2D image characteristic information; performing integral calculation on the pose of the unmanned aerial vehicle by adopting angular velocity and acceleration information provided by an airborne Inertial Measurement Unit (IMU) module between image frames; correcting the visual and inertial information by using UKF to obtain a reliable pose result;
and the self-adaptive pose fusion module is used for performing self-adaptive fusion on output results of the unmanned aerial vehicle pose calculation modules to realize optimal estimation of the pose and the motion state of the unmanned aerial vehicle.
According to the scheme, the active visual detection module at least comprises two parts: image target extraction and feature autonomous selection. The image target extraction part adopts feature detection and pattern matching to identify and extract the cooperative features in the field of view, as shown in FIG. 2. These cooperative features not only provide orientation information for the drone's vision but also have unique graphical patterns. Feature extraction and matching are performed on the image using the Harris corner detector and the Hough transform, target (cooperative feature) identification is completed according to the combination relationship of point and line features, and the point correspondences of the cooperative features are acquired at the same time. This point-correspondence information provides the input for the unmanned aerial vehicle pose calculation.
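As a concrete illustration of this extraction step, the sketch below runs the Harris corner detector and the probabilistic Hough transform with OpenCV. All thresholds are illustrative placeholders, and the final geometric pattern matching against the cooperative-feature templates is only indicated by a comment.

```python
import cv2
import numpy as np

def extract_target_features(image_bgr):
    """Candidate corners and line segments for cooperative-feature matching."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # Harris corner response; keep pixels above a fraction of the peak response
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    corners = np.argwhere(response > 0.01 * response.max())  # (row, col) pairs

    # probabilistic Hough transform on Canny edges for line-segment features
    edges = cv2.Canny(gray, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                               minLineLength=30, maxLineGap=5)

    # downstream, pattern matching would test the point/line geometric
    # constraints of each cooperative feature against these candidates
    return corners, segments
```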
As shown in FIG. 3, a landing cooperative target is made up of a series of cooperative features of different scales. The number of distinct feature scales is defined as the number of layers; which layer is best observed is determined by the distance between the target plane and the drone camera, and as that distance increases, the detection accuracy of the smaller-scale layers deteriorates. Therefore, the feature autonomous selection part not only needs to meet the requirements of the unmanned aerial vehicle landing task, but also has to guarantee the detection performance of the target. According to the camera pinhole imaging principle, a cooperative feature of actual length D appears in the camera field of view with length

d = f · D / H,   (1)

where f represents the camera focal length, D the real size of the cooperative feature, and H the relative distance between the camera and the cooperative feature. The relationship between the actual size and relative distance of such cooperative features can thus be described layer by layer, with D_i denoting the cooperative feature scale of the i-th layer. The real scales of the cooperative features at different visual levels all vary according to the formula

D_k = θ_k · D_{k-1},

where θ_k is the scale ratio between the features of the k-th and (k-1)-th layers. In order for the cooperative features to appear the same size in the camera field of view, as shown in FIG. 4, the distances between all cooperative features and the camera are related by equation (2),

H_k = θ_k · H_{k-1}.   (2)

Suppose that the drone descends from flight height H_max to height H_min; the number of visual feature layers L contained in the landing target then follows from the ratio H_max/H_min together with the scale ratios θ_k. For example, a drone landing from 5 m down to 0.1 m relative to the target requires at least two layers of cooperative features with different scale ratios.
Further, the unmanned aerial vehicle pose calculation module mainly depends on visual and inertial information to calculate the current pose of the unmanned aerial vehicle. In a vision system, a homography can represent the projective transformation between two image planes, or between the image plane and the target plane, and can also realize matching between the feature points of the two planes, as shown in FIG. 4. The target coordinate system is set up on the target plane, i.e. all points on the target plane have coordinate 0 in the Z-axis direction. The homography between the target plane W and image i can then be represented by a 3 × 3 matrix H_wi:

s_1 · [u_i, v_i, 1]^T = H_wi · [X, Y, 1]^T,   (3)

where s_1 is a scale scalar, M = [X, Y]^T represents a feature point on the target plane with homogeneous coordinates [X, Y, 1]^T, and m_i = [u_i, v_i]^T is the image point corresponding to M in image i, with homogeneous coordinates [u_i, v_i, 1]^T. Likewise, the homography between image plane i and image plane j can be represented by a 3 × 3 matrix H_ij:

s_2 · [u_j, v_j, 1]^T = H_ij · [u_i, v_i, 1]^T,   (4)

where s_2 is a scale scalar and m_i, m_j are the matched points of images i and j, written above in homogeneous coordinate form. Solving for the homography matrix is treated as a nonlinear least-squares problem, e.g. minimizing Σ_k || m_j,k − H_ij · m_i,k ||² over all matched pairs k, and the corresponding minimization can be computed by the Levenberg-Marquardt algorithm. The relative pose of the unmanned aerial vehicle is then obtained by homography decomposition. Because the homography contains both the camera intrinsic and extrinsic parameters, and the intrinsic parameter matrix K of the vision system is known, the optimized H yields the relative displacement t and rotation R of the unmanned aerial vehicle through decomposition. The specific pose analysis is as follows:

H = K · [r_1  r_2  t],  so that  r_1 = λ K⁻¹ h_1,  r_2 = λ K⁻¹ h_2,  t = λ K⁻¹ h_3,  with λ = 1 / ||K⁻¹ h_1||,

where h_i represents the i-th column of H and r_i the i-th column of R. Since all column vectors of the rotation matrix R are mutually orthogonal, r_3 can be determined by the calculation r_3 = r_1 × r_2. In general, however, the rotation matrix obtained this way is affected by noise in the image detection data and does not completely satisfy orthogonality. Singular value decomposition is therefore used to optimize the rotation matrix R, generating a new fully orthogonal rotation matrix whose rotation meaning is essentially consistent with the previous one. The rotation R⁻¹ and translation −R⁻¹ t then give the orientation and position of the unmanned aerial vehicle's vision system relative to the landing target. Because the camera is fixed vertically downward on the unmanned aerial vehicle body, the relative pose of the unmanned aerial vehicle itself becomes solvable.
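To make the decomposition concrete, the sketch below recovers R and t from an estimated homography given the intrinsic matrix K, including the SVD orthogonalization step described above. It is one plausible implementation of the text using only NumPy, not the patent's reference code.

```python
import numpy as np

def decompose_homography(H, K):
    """Camera pose relative to a planar target (Z = 0) from homography H."""
    A = np.linalg.inv(K) @ H
    # scale so that the first two columns behave as unit rotation columns
    lam = 1.0 / np.linalg.norm(A[:, 0])
    r1, r2, t = lam * A[:, 0], lam * A[:, 1], lam * A[:, 2]
    r3 = np.cross(r1, r2)
    R = np.column_stack([r1, r2, r3])
    # image noise breaks orthogonality; project R onto the nearest rotation
    # matrix with an SVD, as described in the text
    U, _, Vt = np.linalg.svd(R)
    R = U @ Vt
    if np.linalg.det(R) < 0:            # keep a proper rotation (det = +1)
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    # pose of the camera relative to the target: rotation R^-1, translation -R^-1 t
    return R.T, -R.T @ t
```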
According to the scheme, the unmanned aerial vehicle pose calculation module utilizes the IMU to correct or compensate the visual pose solution in real time under the UKF framework, wherein the IMU serves as the prediction information and the visual result as the update information for calculating the current pose and motion state of the unmanned aerial vehicle, as shown in FIG. 5. The airborne vision system (lower right) outputs the pose measurement (p_v, q_v); the inertial system (upper right) outputs the acceleration and angular velocity information (a_m, ω_m); and the core UKF framework of the pose estimation (left side) mainly comprises two parts, system prediction and measurement update. f(·) represents the state transition equation continuously changing with time, Q is the system noise covariance, h(·) represents the state measurement equation based on the visual pose, and R is the measurement noise covariance. The information sources of the UKF framework comprise the IMU and the vision part; the UKF prediction is driven by the IMU, with state quantity

x = [p, v, q, b_ω, b_a],

where p, v and q respectively represent the relative 3D position, velocity and attitude (quaternion) of the drone, and b_ω, b_a represent the IMU gyroscope and acceleration bias terms. The IMU acquires the accelerations and rotational speeds of the rigid body in three axial directions, while the vision system is responsible for providing the 3D position and attitude of the drone itself relative to the landing target. Assuming that the inertial measurement information contains a bias term b and a random disturbance term n, the angular velocity ω and the acceleration a of the airborne IMU are modeled as

ω = ω_m − b_ω − n_ω,   a = a_m − b_a − n_a,   (7)

where the subscript m represents the measured value. The dynamics of the non-static bias terms b are approximated by a random process driven by white noise,

db_ω/dt = n_{b_ω},   db_a/dt = n_{b_a}.

Therefore, the motion state of the whole unmanned aerial vehicle is expressed by the following differential equations,

dp/dt = v,
dv/dt = C_(q)^T · (a_m − b_a − n_a) − g,
dq/dt = (1/2) · Ω(ω_m − b_ω − n_ω) · q,

where C_(q) represents the rotation matrix corresponding to the attitude quaternion q, g is the gravity vector in the world coordinate system, and Ω(ω) is the quaternion multiplication matrix of ω. The body acceleration and angular velocity information is used for the state prediction of the UKF, while the vision-based pose analysis is mainly used for the state update of the UKF. For the position p_v and attitude q_v obtained by the visual method, the measurement model can be expressed as

z = [p_v, q_v]^T = h(x) + n_z,

where h(x) expresses the pose of the IMU-propagated state (p, q) in the visual measurement frame through q_vw, the rotation from the visual coordinate system to the world coordinate system.
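To make the prediction/update structure of FIG. 5 concrete, here is a compact sketch of an unscented Kalman filter loop. It deliberately simplifies the state above (plain additive noise, no quaternion-specific handling) and leaves the dynamics f and measurement h as user-supplied callables, so it is a structural illustration rather than the patent's filter.

```python
import numpy as np

def sigma_points(x, P, kappa=1.0):
    """Symmetric sigma-point set and weights for state (x, P)."""
    n = len(x)
    S = np.linalg.cholesky((n + kappa) * P)
    pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def unscented_transform(pts, w, noise_cov):
    """Weighted mean and covariance of transformed sigma points."""
    mean = w @ pts
    diff = pts - mean
    return mean, diff.T @ (w[:, None] * diff) + noise_cov

class PoseUKF:
    def __init__(self, x0, P0, Q, R):
        self.x, self.P, self.Q, self.R = x0, P0, Q, R

    def predict(self, f, imu, dt):
        # propagate every sigma point through the IMU-driven dynamics f
        pts, w = sigma_points(self.x, self.P)
        pred = np.array([f(p, imu, dt) for p in pts])
        self.x, self.P = unscented_transform(pred, w, self.Q)

    def update(self, h, z):
        # map sigma points into measurement space (the visual pose) and correct
        pts, w = sigma_points(self.x, self.P)
        zs = np.array([h(p) for p in pts])
        z_mean, S = unscented_transform(zs, w, self.R)
        Pxz = (pts - self.x).T @ (w[:, None] * (zs - z_mean))
        K = Pxz @ np.linalg.inv(S)
        self.x = self.x + K @ (z - z_mean)
        self.P = self.P - K @ S @ K.T
```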
According to the scheme, the self-adaptive pose fusion module takes the active visual detection and unmanned aerial vehicle pose calculation modules as sub-units and adopts federated filtering as the information fusion framework. FIG. 6 shows the overall architecture of the adaptive fusion. The n sub-filters are respectively used for processing the visual pose solutions Z_i obtained from the individual cooperative features; each sub-filter is an independent unscented Kalman filter (UKF), containing its own prediction and update, which takes the corresponding visual solution Z_i as measurement information and outputs a local optimal solution. The airborne IMU serves as the reference information of the federated filter and is responsible for providing the real-time acceleration and angular velocity data for the unmanned aerial vehicle state prediction. In detail, the estimated states of all sub-filters are consistent with the global state, as shown in formula (11),

X_i = X_g = [x, y, z, φ, θ, ψ]^T.   (11)

In any sub-filter, the unmanned aerial vehicle motion state X_i is predicted from the IMU and then updated with the visual measurements. As derived above, vision yields the 3D position (x, y, z) and the attitude angles (φ, θ, ψ) of the unmanned aerial vehicle, corresponding respectively to the pitch, roll and yaw angles; in the fusion process these parameters constitute the measurement information Z_i (i = 1, 2, 3, …), as shown in formula (12),

Z_i = [x, y, z, φ, θ, ψ]^T.   (12)

The estimated state X̂_i obtained by each sub-filter and the corresponding covariance P_i are passed to the global fusion module. P_i accounts for the capability of the corresponding filter, which means that the detection accuracy of the vision module based on cooperative feature i is reflected by P_i. Thus, by weighting all available state estimates X̂_i with their corresponding covariances P_i and summing (the covariances weight the states, see equations (13)-(15)), a global state estimate X̂_g with corresponding system noise Q_g and state covariance P_g can be obtained. The calculation formulas are as follows,

P_g = ( Σ_i P_i⁻¹ )⁻¹,   (13)
X̂_g = P_g · Σ_i P_i⁻¹ · X̂_i,   (14)
Q_g = ( Σ_i Q_i⁻¹ )⁻¹.   (15)

The accuracy of each cooperative-feature-based vision estimation module differs between stages and distance ranges. The federated filter framework therefore introduces information-sharing parameters β_i to define the system noise Q_i and covariance P_i of each sub-filter at the next time instant, dynamically adjusting the confidence of each sub-filter so as to achieve adaptive fusion, as shown in formula (16),

Q_i = β_i⁻¹ · Q_g,   P_i = β_i⁻¹ · P_g,   Σ_i β_i = 1,   (16)

where β_i is related to the covariance P_i of the corresponding sub-filter. In this way, when a relatively significant detection or positioning error occurs in a certain vision estimation module, β_i can mitigate the effect of that vision module on the overall global estimate.
According to the scheme, the self-adaptive pose fusion module further comprises an anomaly monitoring part, which is responsible for monitoring and eliminating erroneous results output by the vision modules before fusion, ensuring the continuity and reliability of the input information of the self-adaptive fusion process. The rotational relationship R_vb between the visual coordinate system and the body (inertial) coordinate system is theoretically constant during the fusion process. At each time k, this rotation amount is estimated as R_vb(k) from the current visual and inertial attitude solutions. Compared with the update frequency of the fusion method, this value changes relatively slowly. Therefore, abnormal values and jump values in the R_vb sequence can be eliminated by a median-filtering smoothing operation. Specifically, the rotation estimate between the body coordinate system and the visual coordinate system is smoothed with a median filter of window size N,

R_med(k) = median( R_vb(k−N+1), …, R_vb(k) ).

When R_vb(k) deviates from R_med(k) beyond the 3-fold variance boundary range, the vision module at that moment is considered to have output a pose solution with obvious errors.
The computer storage medium of the embodiment of the invention stores a computer program executable by a processor, and the computer program executes the unmanned aerial vehicle pose self-adaptive estimation method in the embodiment.
In conclusion, the unmanned aerial vehicle pose self-adaptive estimation method based on active vision establishes a target model with multi-scale cooperative features according to the change of the relative height of the unmanned aerial vehicle during landing, and provides a corresponding detection algorithm; taking the detected cooperative features as measurement information, it provides an unmanned aerial vehicle pose estimation method based on vision and inertia; and it establishes a federated filtering framework that adaptively estimates the pose of the unmanned aerial vehicle and dynamically fuses the pose estimation modules based on cooperative features of different scales by adjusting the information-sharing parameters. The invention ensures stable visual positioning precision for the unmanned aerial vehicle throughout the far-to-near landing process, effectively improves the measurement precision and effective range of unmanned aerial vehicle vision for autonomous landing tasks, and is also applicable to visual perception and positioning research for robots.
It will be understood that modifications and variations can be made by persons skilled in the art in light of the above teachings and all such modifications and variations are intended to be included within the scope of the invention as defined in the appended claims.
Claims (10)
1. An unmanned aerial vehicle pose self-adaptive estimation method based on active vision is characterized by at least comprising the following steps:
active visual detection, namely continuously observing the landing cooperative targets through an airborne visual system of the unmanned aerial vehicle, screening all the detected landing cooperative targets, and reserving information of the landing cooperative targets with higher detection precision in the current visual field range; the landing cooperative target comprises a plurality of groups of cooperative features with different scales, and each group of cooperative features comprises different specific geometric figures;
calculating the pose of the unmanned aerial vehicle, namely calculating the pose of the current unmanned aerial vehicle relative to a cooperative target in real time by taking the visual 2D characteristics and the inertial measurement information as input;
and self-adaptive pose fusion, namely performing self-adaptive fusion based on federated filtering on all calculated pose solutions of the unmanned aerial vehicle relative to the landing cooperative target according to corresponding covariance information to obtain the optimized pose of the unmanned aerial vehicle.
2. The unmanned aerial vehicle pose self-adaptive estimation method according to claim 1, wherein the active visual detection at least comprises two steps of image target extraction and feature autonomous selection, an unmanned aerial vehicle airborne visual system continuously observes landing cooperative targets, detection and feature extraction of all the landing cooperative targets are realized by adopting an image target extraction method, then all the detected targets are screened by utilizing a feature autonomous selection method, and target information with higher detection precision in the current visual field range is reserved.
3. The unmanned aerial vehicle pose adaptive estimation method of claim 1, wherein the unmanned aerial vehicle pose calculation method comprises pose measurement based on vision and inertia and pose calculation based on unscented kalman filtering; the pose measurement based on vision and inertia mainly utilizes inertia or vision detection information to calculate the relative pose of the unmanned aerial vehicle; and the pose calculation based on the unscented Kalman filtering is responsible for calculating the pose under the condition of simultaneously having inertia and visual detection information.
4. The unmanned aerial vehicle pose adaptive estimation method according to claim 1, wherein when only inertial measurement information is input, the pose change of the unmanned aerial vehicle relative to the previous moment is determined by using the time integral of angular velocity and acceleration information of the unmanned aerial vehicle; when the visual 2D features are obtained, the unmanned aerial vehicle calculates the pose of the unmanned aerial vehicle relative to the landing cooperative target according to homography transformation of the visual features.
5. The unmanned aerial vehicle pose adaptive estimation method of claim 1, further comprising the steps of:
and anomaly monitoring, namely continuously detecting and identifying the optimized pose solution and eliminating an abnormal value.
6. The unmanned aerial vehicle pose self-adaptive estimation method according to claim 2, characterized in that the image target extraction method comprises three steps of line segment feature detection, corner feature detection and geometric pattern matching, and the airborne vision system processes the real-time acquired image by using the three steps, completes pattern matching according to geometric constraint relation between points and lines, and simultaneously realizes landing cooperative target detection and feature extraction.
7. The unmanned aerial vehicle pose self-adaptive estimation method according to claim 2, characterized in that the feature self-selection method simulates the biological mechanism of human vision for selecting positioning reference objects from surrounding scenes, and selects the optimal target in the view for the input information of visual positioning from the combination of landing cooperative targets with different sizes and dimensions according to the 3D-2D projection relation between the imaging proportion and the relative distance of the targets in the vision.
8. The unmanned aerial vehicle pose self-adaptive estimation method according to claim 3, wherein the pose calculation method based on unscented Kalman filtering takes inertial and visual detection information as input, the inertial sensor performs cumulative calculation of the unmanned aerial vehicle pose from angular velocity and acceleration at a refresh rate of 100 or 200 Hz, and the visual detection information yields the unmanned aerial vehicle pose through the visual homography transformation; the two pose solutions correct each other under a Bayesian framework, and covariance information corresponding to the corrected value is obtained; the nonlinear state transition of the unmanned aerial vehicle over time is realized by the unscented transform.
9. The unmanned aerial vehicle pose self-adaptive estimation method according to claim 3, wherein the self-adaptive pose fusion method modularizes the visual pose calculation, performs federated fusion on the pose solutions and corresponding covariances output by the modules, monitors abnormal states in the module outputs, and provides continuous and stable pose estimation for the unmanned aerial vehicle in the autonomous landing stage.
10. A computer storage medium having stored therein a computer program executable by a processor, the computer program performing the unmanned aerial vehicle pose adaptive estimation method of any one of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911133525.1A CN110865650B (en) | 2019-11-19 | 2019-11-19 | Unmanned aerial vehicle pose self-adaptive estimation method based on active vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911133525.1A CN110865650B (en) | 2019-11-19 | 2019-11-19 | Unmanned aerial vehicle pose self-adaptive estimation method based on active vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110865650A true CN110865650A (en) | 2020-03-06 |
CN110865650B CN110865650B (en) | 2022-12-20 |
Family
ID=69654937
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911133525.1A Active CN110865650B (en) | 2019-11-19 | 2019-11-19 | Unmanned aerial vehicle pose self-adaptive estimation method based on active vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110865650B (en) |
- 2019-11-19: Application CN201911133525.1A filed in China; granted as CN110865650B (status: active)
Patent Citations (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101216319A (en) * | 2008-01-11 | 2008-07-09 | 南京航空航天大学 | Low orbit satellite multi-sensor fault tolerance autonomous navigation method based on federal UKF algorithm |
CN102991681A (en) * | 2012-12-25 | 2013-03-27 | 天津工业大学 | Ground target identification method in unmanned aerial vehicle vision landing system |
CN103942813A (en) * | 2014-03-21 | 2014-07-23 | 杭州电子科技大学 | Single-moving-object real-time detection method in electric wheelchair movement process |
CN104166854A (en) * | 2014-08-03 | 2014-11-26 | 浙江大学 | Vision grading landmark locating and identifying method for autonomous landing of small unmanned aerial vehicle |
CN106687878A (en) * | 2014-10-31 | 2017-05-17 | 深圳市大疆创新科技有限公司 | Systems and methods for surveillance with visual marker |
CN104460685A (en) * | 2014-11-21 | 2015-03-25 | 南京信息工程大学 | Control system for four-rotor aircraft and control method of control system |
US20190107440A1 (en) * | 2015-05-12 | 2019-04-11 | BioSensing Systems, LLC | Apparatuses And Methods For Bio-Sensing Using Unmanned Aerial Vehicles |
US20180203467A1 (en) * | 2015-09-15 | 2018-07-19 | SZ DJI Technology Co., Ltd. | Method and device of determining position of target, tracking device and tracking system |
CN105197252A (en) * | 2015-09-17 | 2015-12-30 | 武汉理工大学 | Small-size unmanned aerial vehicle landing method and system |
CN106203439A (en) * | 2016-06-27 | 2016-12-07 | 南京邮电大学 | The homing vector landing concept of unmanned plane based on mark multiple features fusion |
US9924320B1 (en) * | 2016-12-30 | 2018-03-20 | Uber Technologies, Inc. | Locating a user device |
CN106873628A (en) * | 2017-04-12 | 2017-06-20 | 北京理工大学 | A kind of multiple no-manned plane tracks the collaboration paths planning method of many maneuvering targets |
CN107239077A (en) * | 2017-06-28 | 2017-10-10 | 歌尔科技有限公司 | A kind of unmanned plane displacement computing system and method |
CN107289948A (en) * | 2017-07-24 | 2017-10-24 | 成都通甲优博科技有限责任公司 | A kind of UAV Navigation System and method based on Multi-sensor Fusion |
US20190248487A1 (en) * | 2018-02-09 | 2019-08-15 | Skydio, Inc. | Aerial vehicle smart landing |
US10304208B1 (en) * | 2018-02-12 | 2019-05-28 | Avodah Labs, Inc. | Automated gesture identification using neural networks |
CN109209229A (en) * | 2018-10-15 | 2019-01-15 | 中国石油集团渤海钻探工程有限公司 | A kind of track in horizontal well landing mission regulates and controls method |
CN109376785A (en) * | 2018-10-31 | 2019-02-22 | 东南大学 | Air navigation aid based on iterative extended Kalman filter fusion inertia and monocular vision |
CN109407708A (en) * | 2018-12-11 | 2019-03-01 | 湖南华诺星空电子技术有限公司 | A kind of accurate landing control system and Landing Control method based on multi-information fusion |
CN109737959A (en) * | 2019-03-20 | 2019-05-10 | 哈尔滨工程大学 | A kind of polar region Multi-source Information Fusion air navigation aid based on federated filter |
CN110018691A (en) * | 2019-04-19 | 2019-07-16 | 天津大学 | Small-sized multi-rotor unmanned aerial vehicle state of flight estimating system and method |
Non-Patent Citations (6)
Title |
---|
DAVIDE FALANGA et al.: "Vision-based autonomous quadrotor landing on a moving platform", 2017 IEEE International Symposium on Safety, Security and Rescue Robotics *
HAIWEN YUAN et al.: "A Hierarchical Vision-Based UAV Localization for an", Electronics *
YUE MENG et al.: "A Vision/Radar/INS Integrated Guidance Method for Shipboard Landing", IEEE Transactions on Industrial Electronics *
YU Geng et al.: "Integrated navigation fusion method for instrument landing system / GBAS landing system / inertial navigation system based on unscented Kalman filtering", Science Technology and Engineering *
WU Liangjing: "Design of a vision-based pose estimator for unmanned aerial vehicle autonomous landing", China Master's Theses Full-text Database, Engineering Science and Technology II *
TIAN Feng: "Design and research of visual positioning technology based on chip wire bonding", China Master's Theses Full-text Database, Information Science and Technology *
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112504261A (en) * | 2020-11-09 | 2021-03-16 | 中国人民解放军国防科技大学 | Unmanned aerial vehicle landing pose filtering estimation method and system based on visual anchor point |
CN112504261B (en) * | 2020-11-09 | 2024-02-09 | 中国人民解放军国防科技大学 | Unmanned aerial vehicle falling pose filtering estimation method and system based on visual anchor points |
CN113066050A (en) * | 2021-03-10 | 2021-07-02 | 天津理工大学 | Method for resolving course attitude of airdrop cargo bed based on vision |
CN113066050B (en) * | 2021-03-10 | 2022-10-21 | 天津理工大学 | Method for resolving course attitude of airdrop cargo bed based on vision |
CN113608556A (en) * | 2021-07-19 | 2021-11-05 | 西北工业大学 | Multi-robot relative positioning method based on multi-sensor fusion |
CN113608556B (en) * | 2021-07-19 | 2023-06-30 | 西北工业大学 | Multi-robot relative positioning method based on multi-sensor fusion |
CN113838215A (en) * | 2021-07-30 | 2021-12-24 | 歌尔光学科技有限公司 | VR collision detection method and system |
CN113865579A (en) * | 2021-08-06 | 2021-12-31 | 湖南大学 | Unmanned aerial vehicle pose parameter measuring system and method |
CN114543797A (en) * | 2022-02-18 | 2022-05-27 | 北京市商汤科技开发有限公司 | Pose prediction method and apparatus, device, and medium |
CN114543797B (en) * | 2022-02-18 | 2024-06-07 | 北京市商汤科技开发有限公司 | Pose prediction method and device, equipment and medium |
CN114415736A (en) * | 2022-04-01 | 2022-04-29 | 之江实验室 | Multi-stage visual accurate landing method and device for unmanned aerial vehicle |
CN116399327A (en) * | 2023-04-10 | 2023-07-07 | 烟台欣飞智能系统有限公司 | Unmanned aerial vehicle positioning system based on multisource data fusion |
Also Published As
Publication number | Publication date |
---|---|
CN110865650B (en) | 2022-12-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110865650B (en) | Unmanned aerial vehicle pose self-adaptive estimation method based on active vision | |
US20210012520A1 (en) | Distance measuring method and device | |
CN113269098B (en) | Multi-target tracking positioning and motion state estimation method based on unmanned aerial vehicle | |
CN111024066A (en) | Unmanned aerial vehicle vision-inertia fusion indoor positioning method | |
CN108406731A (en) | A kind of positioning device, method and robot based on deep vision | |
CN105931275A (en) | Monocular and IMU fused stable motion tracking method and device based on mobile terminal | |
US20180075614A1 (en) | Method of Depth Estimation Using a Camera and Inertial Sensor | |
CN208323361U (en) | A kind of positioning device and robot based on deep vision | |
US20200098115A1 (en) | Image processing device | |
CN114623817A (en) | Self-calibration-containing visual inertial odometer method based on key frame sliding window filtering | |
CN110736457A (en) | combination navigation method based on Beidou, GPS and SINS | |
Palonen et al. | Augmented reality in forest machine cabin | |
CN110598370B (en) | Robust attitude estimation of multi-rotor unmanned aerial vehicle based on SIP and EKF fusion | |
CN110108894B (en) | Multi-rotor speed measuring method based on phase correlation and optical flow method | |
CN114910069A (en) | Fusion positioning initialization system and method for unmanned aerial vehicle | |
Xu et al. | Towards autonomous tracking and landing on moving target | |
CN112862818A (en) | Underground parking lot vehicle positioning method combining inertial sensor and multi-fisheye camera | |
CN116952229A (en) | Unmanned aerial vehicle positioning method, device, system and storage medium | |
CN112146627A (en) | Aircraft imaging system using projected patterns on featureless surfaces | |
Ramos et al. | Vision-based tracking of non-cooperative space bodies to support active attitude control detection | |
Aminzadeh et al. | Implementation and performance evaluation of optical flow navigation system under specific conditions for a flying robot | |
CN115574816A (en) | Bionic vision multi-source information intelligent perception unmanned platform | |
Zheng et al. | Integrated navigation system with monocular vision and LIDAR for indoor UAVs | |
CN117760417B (en) | Fusion positioning method and system based on 4D millimeter wave radar and IMU | |
CN111712855A (en) | Ground information processing method and device and unmanned vehicle |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |