CN112184812A - Method for improving identification and positioning precision of unmanned aerial vehicle camera to Apriltag, positioning method and positioning system - Google Patents

Method for improving identification and positioning precision of unmanned aerial vehicle camera to Apriltag, positioning method and positioning system

Info

Publication number: CN112184812A (application CN202011008679.0A)
Authority: CN (China)
Prior art keywords: unmanned aerial vehicle, camera, AprilTag, positioning
Legal status: Granted; currently Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN112184812B
Inventors: 陈亮, 彭小红, 闫秀英, 余应淮, 王骥, 邓锐, 刘桃丽, 谢水镔, 李登印, 谢宝达, 叶友强, 苏泽宇
Current and Original Assignee: Guangdong Ocean University (the listed assignees may be inaccurate)
Application filed by Guangdong Ocean University
Priority to CN202011008679.0A (patent CN112184812B granted)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method for improving the accuracy with which an unmanned aerial vehicle camera identifies and positions AprilTag, comprising: calibrating the camera; fusing the spatial-point information of multiple AprilTags; adding one Gauss-Newton iteration to optimize the camera pose after solving it with PnP; and optimizing the transformation matrices between AprilTags using pose-graph optimization theory. The invention also provides a positioning system and a positioning method for the unmanned aerial vehicle; the positioning process is simple and easy to operate, and the accuracy-improvement method is applied throughout. The invention achieves high-precision positioning of the unmanned aerial vehicle indoors where the GPS signal is weak, positioning with the two-dimensional-code array alone, and has the advantages of precise positioning, low cost, reliable performance, and low susceptibility to environmental interference.

Description

Method for improving identification and positioning precision of unmanned aerial vehicle camera to Apriltag, positioning method and positioning system
Technical Field
The invention relates to the field of unmanned aerial vehicle positioning, in particular to a method for improving the identification and positioning accuracy of an unmanned aerial vehicle camera to AprilTag, a positioning method and a positioning system.
Background
Indoor positioning is a key problem in the fields of AR, VR, robotics, and the like, and is a prerequisite for indoor autonomous navigation. Although GPS can solve the positioning problem outdoors, its signal indoors is weak, so the accuracy cannot meet requirements or positioning fails altogether; other positioning technologies are therefore required indoors. To solve the indoor positioning problem, schemes such as UWB positioning, WiFi positioning, Bluetooth positioning, and SLAM positioning have been developed.
However, the above solutions all have significant technical limitations or lack economic practicality:
1. Bluetooth-based positioning suffers heavy environmental interference and cannot cope with shielding by equipment;
2. UWB-based positioning schemes have been applied to unmanned aerial vehicles, but they require at least 3 base stations, and signal propagation is blocked by obstacles such as walls, ceilings, doors, and people; the signals are reflected, refracted, and diffracted, so transmitted signals reach the UWB receiver at different times over different paths, which degrades UWB accuracy and requires complex filtering and optimization algorithms;
3. visual-SLAM-based positioning occupies a large share of CPU resources at run time and needs a relatively powerful CPU to run in real time, so equipment cost and technical difficulty are high; it also requires rich environmental texture, and the top-down view of an indoor unmanned aerial vehicle is not particularly rich in texture, so general SLAM algorithms are not fully suitable for the top-down view and the positioning trajectory drifts easily;
4. mapping and positioning with a three-dimensional lidar is expensive and heavy, placing extreme demands on the unmanned aerial vehicle.
Disclosure of Invention
The invention aims to overcome at least one defect of the prior art by providing a method for improving the accuracy with which an unmanned aerial vehicle camera identifies and positions AprilTag, together with a positioning method and a positioning system, to solve the problems of heavy environmental interference, low positioning accuracy, heavy CPU load, and high cost in indoor unmanned aerial vehicle positioning.
The invention adopts the technical scheme that a method for improving the identification and positioning precision of an unmanned aerial vehicle camera to AprilTag comprises the following steps:
S1, calibrate the camera on the unmanned aerial vehicle and calculate the camera intrinsics and distortion coefficients;
S2, with the camera intrinsics and distortion coefficients obtained, the camera captures an AprilTag image, spatial-point information is identified from the captured AprilTag image, and the spatial-point information of multiple AprilTags is fused;
S3, after the spatial-point information is obtained, use PnP in OpenCV to solve the camera pose in the world coordinate system, and add one Gauss-Newton iteration to further optimize the pose after the PnP solution;
S4, optimize the transformation matrices between AprilTags using pose-graph optimization theory.
In step S1, the camera of the unmanned aerial vehicle is calibrated first. In the field of machine vision, camera calibration is a key link: the precision of the calibration result and the stability of the algorithm directly affect the accuracy of the results the camera produces, and thus determine whether the machine-vision system can position effectively and whether the target object can be computed effectively; camera calibration is therefore a precondition for all subsequent operations. In step S2, since one picture may contain several recognized AprilTags and relying on a single AprilTag is not very reliable, accuracy can be improved by fusing the spatial-point information of multiple AprilTags. In step S3, the error of the pose solution can be further reduced by adding one more Gauss-Newton iteration after solving the pose with PnP. In step S4, the transformation matrices between AprilTags after pose-graph optimization can essentially be used for indoor positioning of the unmanned aerial vehicle, with high positioning accuracy.
Further, the camera intrinsics in step S1 are solved as follows: according to the imaging principle of the pinhole camera model, a 3-D point P(X, Y, Z) in the camera coordinate system has a projection point P1 with coordinates (u, v) in the pixel coordinate system, related by

Z · [u, v, 1]^T = K · [X, Y, Z]^T,  K = [f_x 0 c_x; 0 f_y c_y; 0 0 1]

where K is the camera intrinsic matrix, f_x is the focal length along the x-axis, f_y is the focal length along the y-axis, c_x is the x-axis translation, and c_y is the y-axis translation;
the distortion to solve divides into radial and tangential distortion, with the mathematical model:

x_distorted = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x y + p_2 (r^2 + 2 x^2)
y_distorted = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2 y^2) + 2 p_2 x y

where (x, y) are the coordinates of point P in the normalized plane, x_distorted and y_distorted are the distorted coordinates, r is the distance from (x, y) to the origin, k_1, k_2, k_3 are the radial distortion coefficients, and p_1, p_2 are the tangential distortion coefficients.
Further, in step S2, the spatial-point information of multiple AprilTags is fused. The specific process is to use the Euclidean distance between each AprilTag centre and the image centre as a reliability weight and fuse the spatial points of two AprilTags:

Q = (r_2 · q_1 + r_1 · q_2) / (r_1 + r_2)

where Q is the fused spatial point, q_1 and q_2 are the spatial points corresponding to the two AprilTags, and r_1, r_2 are the Euclidean distances of the two AprilTags' centre pixels from the image centre, so the tag nearer the image centre receives the larger weight.
Further, the specific process of step S3 is to solve the BA (bundle adjustment) problem with the Levenberg-Marquardt algorithm from nonlinear optimization, whose update equation is:

(J^T J + μI) Δx = -J^T e

where J is the 2 × 6 Jacobian matrix of the error function, e is the error function, I is the identity matrix, and J^T is the transpose of J. The error function is the reprojection error; with the radial distortion included it takes the form

e = [ u - (f_x x_d + c_x), v - (f_y y_d + c_y) ]^T,
x_d = x'(1 + k_1 d^2 + k_2 d^4), y_d = y'(1 + k_1 d^2 + k_2 d^4), x' = X'/Z', y' = Y'/Z'

and J = ∂e/∂ξ is obtained by differentiating e with respect to the perturbation ξ. Here (X', Y', Z') are the spatial-point coordinates in the camera coordinate system, ξ is the perturbation of the transformation matrix T on the special Euclidean group SE(3), and (u, v) are pixel coordinates; k_1 and k_2 are distortion coefficients, f_x is the focal length along the x-axis, f_y the focal length along the y-axis, c_x the x-axis translation, c_y the y-axis translation, and d is the distance of the pixel coordinate from the origin in the normalized coordinate system.
The steps of the Levenberg-Marquardt algorithm are:
S31, give an initial transformation matrix T_0 and a damping factor μ;
S32, for the n-th iteration, solve the Jacobian matrix of the error function e at x, obtain the increment Δx from the update equation, and set x_new = x ⊕ Δx;
S33, calculate the gain ratio ρ used to update the damping factor;
S34, if ρ > 0.75, set μ = μ × 2 and accept the update x_new;
S35, if ρ < 0.25, set μ = μ/3 and reject the update x_new;
S36, judge whether the algorithm has converged; if so, end the iteration; otherwise return to S32;
where ρ, the gain ratio that governs the damping-factor update, is the ratio of the actual decrease of the error function to the decrease predicted by the linearized model:

ρ = ( F(x) - F(x_new) ) / ( L(0) - L(Δx) )
Further, in step S4, the specific process of optimizing the transformation matrices between AprilTags with pose-graph optimization theory is as follows:
let the poses of the AprilTags be T_1, …, T_n, and let T_ij be the transformation matrix between T_i and T_j; then

T_ij = T_i^{-1} T_j

If there is an error in the transformation between the poses of the AprilTags, the equation does not hold exactly; letting the error be e_ij, we have

e_ij = ln( T_ij^{-1} T_i^{-1} T_j )^∨

With E the set of all edges, the overall objective function is:

min_T (1/2) Σ_{(i,j)∈E} e_ij^T e_ij
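The consistency condition above can be checked numerically. The sketch below is an illustrative simplification, not the patent's implementation: it builds 4 × 4 homogeneous transforms with numpy and measures how far the residual matrix T_ij^{-1} T_i^{-1} T_j deviates from the identity using the Frobenius norm, whereas a full pose-graph optimizer (e.g. g2o) maps this residual through the SE(3) logarithm and minimizes the summed objective.

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def edge_residual(T_ij, T_i, T_j):
    """Residual of one pose-graph edge: deviation of T_ij^{-1} T_i^{-1} T_j
    from the identity (Frobenius norm, a simplification of the SE(3) log map)."""
    E = np.linalg.inv(T_ij) @ np.linalg.inv(T_i) @ T_j
    return np.linalg.norm(E - np.eye(4))
```

When the measured edge T_ij agrees exactly with T_i^{-1} T_j the residual is zero; any perturbation of the edge makes it positive, which is what the objective function penalizes over all edges.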
the invention also provides a technical scheme, and the unmanned aerial vehicle positioning method comprises the following steps:
B1, place the AprilTag array in the scene, calculate the camera intrinsics and distortion coefficients using step S1 of claim 1, and perform configuration and calibration of the unmanned aerial vehicle;
B2, after receiving the takeoff instruction from the remote controller, the unmanned aerial vehicle automatically flies to the designated height and hovers;
B3, the camera of the unmanned aerial vehicle identifies AprilTag information, optimizes the AprilTag positioning information using steps S2 and S3 of claim 1, and sends the optimized AprilTag positioning information to the upper computer;
B4, the upper computer receives the AprilTag positioning information transmitted by the unmanned aerial vehicle camera, displays it in the designated message frame, assigns an ID number to each AprilTag positioning-information area, and draws a two-dimensional trajectory on the visualization interface;
B5, the upper computer designates one of the ID numbers and sends it to the drone; after receiving the designated ID number, the drone flies to the AprilTag with that ID, and during this process the transformation matrices between AprilTags are continuously optimized using step S4 of claim 1.
Further, in step B5, besides designating an ID number to send to the drone, the upper computer may also designate a flight trajectory to send to the drone. That is, the upper computer can send only a designated ID number, making the unmanned aerial vehicle fly to the AprilTag with that ID, or it can designate a flight trajectory so that the unmanned aerial vehicle flies along the designated path.
Further, in step B5, after receiving the designated ID number, the drone flies to the AprilTag with that ID, which specifically comprises:
B51, the upper computer designates an ID number and sends it to the unmanned aerial vehicle, and the unmanned aerial vehicle converts the designated ID into the pixel coordinates of the target AprilTag;
b52, decoupling the obtained pixel coordinates from roll and pitch by the unmanned aerial vehicle;
b53, fusing height information h by the unmanned aerial vehicle;
and B54, controlling the unmanned aerial vehicle to fly by using a PID controller.
Using a PID controller to control the flight of the unmanned aerial vehicle effectively reduces the lag of its flight response and improves flight efficiency.
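The patent only states that a PID controller drives the drone toward the target; the discrete form, gains, and sampling period below are illustrative assumptions, not values from the invention. A minimal sketch:

```python
class PID:
    """Discrete PID controller: u = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, error):
        """Advance one control step of length dt and return the control output."""
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

In this scheme the error fed to the controller would be the pixel offset between the target AprilTag and the image centre (after the roll/pitch decoupling of step B52), with one controller per horizontal axis.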
The present invention also provides an unmanned aerial vehicle positioning system, comprising an indoor space, two-dimensional codes, an unmanned aerial vehicle, and an upper computer; the two-dimensional codes are placed on the floor of the indoor space, the unmanned aerial vehicle is connected to the upper computer via Bluetooth or WiFi, and the upper computer controls the unmanned aerial vehicle.
Further, the unmanned aerial vehicle comprises an unmanned aerial vehicle main body, a drive-free camera, an optical-flow sensor, and an OpenMV module, all of which are connected to the unmanned aerial vehicle main body; the upper computer is a PC. The drive-free camera acquires images for the PC for offline positioning or online aerial photography; the optical-flow sensor processes changes between images to detect motion relative to the ground, monitoring the aircraft's movement and mainly keeping its horizontal position; the OpenMV module performs AprilTag identification, transmits the result to the unmanned aerial vehicle through a serial port, and sends the positioning information to the upper computer via Bluetooth for visualization.
Compared with the prior art, the invention has the following beneficial effects: the method for improving the identification and positioning accuracy of the unmanned aerial vehicle camera for AprilTag effectively improves that accuracy with a simple, convenient, and innovative approach. Meanwhile, the unmanned aerial vehicle positioning method achieves accurate positioning when the indoor GPS signal is weak; the invention uses only the AprilTag two-dimensional codes for indoor positioning of the unmanned aerial vehicle, is little affected by environmental interference, occupies few CPU resources, and is low-cost.
Drawings
Fig. 1 is an overall schematic view of the present invention.
Fig. 2 is a hardware connection block diagram in embodiment 1 of the present invention.
Fig. 3 is a schematic diagram of the flight control of the unmanned aerial vehicle in embodiment 1 of the present invention.
Fig. 4 is a schematic diagram of a gyroscope and an accelerometer of the unmanned aerial vehicle in embodiment 1 of the invention.
Fig. 5 is a schematic diagram of an electronic compass of the unmanned aerial vehicle in embodiment 1 of the present invention.
Fig. 6 is the pinhole imaging model of the pinhole camera in embodiment 2 of the present invention.
Fig. 7 is a schematic view of a projection of the pinhole imaging model in the x-axis direction in embodiment 2 of the present invention.
Fig. 8 is a schematic diagram of image distortion in embodiment 2 of the present invention.
Fig. 9 is a schematic diagram of tangential distortion in embodiment 2 of the present invention.
Fig. 10 is a schematic view of a chessboard calibration board in embodiment 2 of the invention.
Fig. 11 is a schematic diagram of Matlab identifying corner points of a chessboard in embodiment 2 of the invention.
Fig. 12 shows the poses of the cameras in the coordinate system of the chessboard in embodiment 2 of the present invention.
Fig. 13 shows pixel errors for each camera pose in embodiment 2 of the present invention.
Fig. 14 is a corresponding relationship between feature points and space points in embodiment 2 of the present invention.
FIG. 15 is a block diagram illustrating the reprojection error of the conventional AprilTag attitude solution in embodiment 2 of the present invention.
FIG. 16 shows the reprojection error of PnP attitude solution in embodiment 2 of the present invention.
Fig. 17 is a structure of the diagram in embodiment 2 of the present invention.
FIG. 18 is a comparison of the optimization of g2o in example 2 of the present invention.
Fig. 19 is a result of optimizing an image acquired by a general camera by pose graph optimization in embodiment 2 of the present invention.
Fig. 20 is a result of optimizing an OpenMV acquired image by pose graph optimization in embodiment 2 of the present invention.
Fig. 21 is the top-view positioning trajectory of ORB-SLAM2, a prior-art method, in embodiment 2 of the present invention.
Fig. 22 is the top-view positioning trajectory of the attitude-resolution API in the University of Michigan source code, a prior-art method, in embodiment 2 of the present invention.
Fig. 23 is a top view of a single AprilTag information positioning in embodiment 2 of the present invention.
Fig. 24 is a positioning top view trajectory fusing multiple AprilTag information in embodiment 2 of the present invention.
Fig. 25 is a flowchart of the unmanned aerial vehicle process in embodiment 3 of the present invention.
Fig. 26 is a flowchart of the execution steps at the unmanned aerial vehicle end after the upper computer gives the aprilatag target ID in embodiment 3 of the present invention.
Fig. 27 is a flowchart of an upper computer program according to embodiment 3 of the present invention.
Fig. 28 is a display interface of the upper computer in embodiment 3 of the present invention.
Fig. 29 is a diagram of the flight results of the unmanned aerial vehicle in embodiment 3 of the present invention.
Fig. 30 shows the display and visualization of the positioning information of the upper computer in embodiment 3 of the present invention.
Fig. 31 is a flowchart of a method for improving AprilTag identification and positioning accuracy of an unmanned aerial vehicle camera in embodiment 2 of the present invention.
Detailed Description
The drawings are only for purposes of illustration and are not to be construed as limiting the invention. For a better understanding of the following embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
Example 1
As shown in fig. 1, the positioning system for the unmanned aerial vehicle provided by the present embodiment includes an indoor space, a two-dimensional code 200, an unmanned aerial vehicle 100, and an upper computer, where the unmanned aerial vehicle 100 includes an unmanned aerial vehicle main body, a drive-free camera, an optical flow sensor, and an OpenMV, and the drive-free camera, the optical flow sensor, and the OpenMV are all connected to the unmanned aerial vehicle main body; the upper computer is a PC terminal.
Specifically, as shown in fig. 2, the two-dimensional code 200 adopted in this embodiment is a 3 × 3 AprilTag array; the execution environment of the upper-computer PC is Windows 10. The drive-free camera may be connected to a Raspberry Pi 3B, which photographs the forward image and transmits it over WiFi to a remote interface; the remote interface may be implemented with TightVNC, and images on the Raspberry Pi can be stored locally for offline SLAM positioning or online aerial photography. OpenMV performs AprilTag recognition at a resolution of 160 × 120. The unmanned aerial vehicle 100 may adopt the open-source "Anonymous" (ANO) flight controller with an STM32F407 CPU, which meets the system requirements; being open source, its low-level program is convenient to modify. A specific flight-control schematic is shown in fig. 3; 5 serial communication interfaces can be led out from the flight controller, making communication with OpenMV and the optical-flow sensor convenient. In addition, in this embodiment the flight controller of the unmanned aerial vehicle 100 can also connect to a remote controller through a wireless receiver; the remote controller supports one-key takeoff and can recover the unmanned aerial vehicle 100 when PC control fails. The drone 100 may further include a gyroscope, an accelerometer, and an electronic compass; the gyroscope and accelerometer are shown in fig. 4, and the electronic compass in fig. 5.
Example 2
The embodiment provides a method for improving AprilTag identification and positioning accuracy of an unmanned aerial vehicle camera, as shown in fig. 31, the method includes the following steps:
S1, calibrate the camera on the unmanned aerial vehicle and calculate the camera intrinsics and distortion coefficients;
S2, with the camera intrinsics and distortion coefficients obtained, the camera captures an AprilTag image, spatial-point information is identified from the captured AprilTag image, and the spatial-point information of multiple AprilTags is fused;
S3, after the spatial-point information is obtained, use PnP in OpenCV to solve the camera pose in the world coordinate system, and add one Gauss-Newton iteration to further optimize the pose after the PnP solution;
S4, optimize the transformation matrices between AprilTags using pose-graph optimization theory.
In step S1, the imaging principle of the pinhole camera model can be explained with the pinhole imaging model shown in fig. 6; both convert an object in three-dimensional space into a two-dimensional plane coordinate system, which in the camera is the normalized plane coordinate system.
The camera parameters may be solved as follows. As shown in fig. 7, let the length of the object AP in the X-axis direction be X; its projection BP' has length X'; the length of AP along the Y-axis is Y and its projection has length Y'; the focal length f is known and the length in the Z-axis direction is Z. From similar triangles,

X' = f · X / Z
Y' = f · Y / Z

When this formula is used in a camera model, X' and Y' must be converted into the pixel coordinate system, whose origin is the upper-left corner of the image; the conversion applies a certain scaling and translation. Assuming the x-axis magnification is α with translation c_x, and the y-axis magnification is β with translation c_y, we obtain

u = α·X' + c_x = f_x · X / Z + c_x
v = β·Y' + c_y = f_y · Y / Z + c_y

where f_x = αf and f_y = βf. Converted to matrix form,

Z · [u, v, 1]^T = K · [X, Y, Z]^T,  K = [f_x 0 c_x; 0 f_y c_y; 0 0 1]

where K is the camera intrinsic matrix.
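The matrix form above can be sketched as a small projection helper. The intrinsic values used here (f_x = f_y = 500, c_x = 320, c_y = 240) are illustrative assumptions, not the calibrated values reported later:

```python
import numpy as np

# Illustrative intrinsics (assumed values, not the calibrated ones)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

def project_point(K, P):
    """Pinhole projection: Z * [u, v, 1]^T = K * [X, Y, Z]^T.
    P = (X, Y, Z) in camera coordinates; returns pixel coordinates (u, v)."""
    p = K @ np.asarray(P, dtype=float)
    return p[:2] / p[2]
```

For example, any point on the optical axis (0, 0, Z) lands on the principal point (c_x, c_y), regardless of its depth.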
Because the camera has a lens, the picture deforms easily, for example a straight line becoming a curve. Distortion caused by the lens is called radial distortion and can be divided into pincushion distortion and barrel distortion; fig. 8 shows, in order, a normal image, pincushion distortion, and barrel distortion. Opposed to radial distortion is tangential distortion, shown in fig. 9, which results from the lens not being parallel to the imaging plane.
The mathematical models for radial and tangential distortion are as follows:

x_distorted = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x y + p_2 (r^2 + 2 x^2)
y_distorted = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2 y^2) + 2 p_2 x y

where (x, y) are the coordinates in the normalized plane, r is the distance of (x, y) from the origin, k_1, k_2, k_3 are the radial distortion coefficients, and p_1, p_2 are the tangential distortion coefficients.
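The radial-plus-tangential model above can be sketched directly; the coefficient values are left to the caller, so nothing here depends on the calibration results:

```python
def distort(x, y, k1, k2, k3, p1, p2):
    """Apply the radial + tangential distortion model to normalized-plane
    coordinates (x, y); returns the distorted coordinates (x_d, y_d)."""
    r2 = x * x + y * y                                   # r^2
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3   # 1 + k1 r^2 + k2 r^4 + k3 r^6
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d
```

With all coefficients zero the model is the identity, and a point at the origin of the normalized plane is never displaced, which matches the observation that noise grows with distance from the image centre.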
Specifically, as shown in fig. 10 and 11, this embodiment uses the Matlab chessboard calibration method to solve the camera intrinsics and distortion coefficients. Besides the intrinsics and distortion parameters, Matlab can also solve the transformation matrix T, which links the camera to the outside world and consists of a rotation matrix R and a translation vector t. The Matlab solution does not estimate all distortion parameters by default, so the remaining distortion coefficients are obtained with OpenCV. The final intrinsics and distortion coefficients are as follows:

K = [509.0703, 0, 349.8988; 0, 507.1801, 178.7456; 0, 0, 1]
k_1 = 0.095175, k_2 = 0.040872, k_3 = -0.275938
p_1 = -0.010368, p_2 = -0.001062
Matlab performs corner detection on the chessboard calibration board to obtain the pixel coordinates of the corners, and a world coordinate system is set up on the chessboard; the corners in the chessboard's world coordinate system have the property that their z-coordinate is 0. The relation between the corner coordinates in the pixel coordinate system and in the world coordinate system is then established, and the intrinsic and distortion parameters are solved by methods such as PnP. This embodiment uses 16 chessboard pictures with a square side length of 24 mm. After the Matlab solution, the pose of the camera relative to the chessboard coordinate system can be seen for each picture; the result is shown in fig. 12. The final pixel error is below 0.35, which basically meets the usage requirements; the result is shown in fig. 13.
In step S2, one picture may contain several recognized AprilTags, and relying on a single AprilTag's information is not very reliable, so multiple AprilTags must be combined to obtain the transformation matrix. This embodiment considers only the barrel distortion of the camera: the farther a pixel is from the centre point, the more noise is introduced, so the Euclidean distance between the AprilTag centre and the image centre is used as a reliability weight when fusing the spatial points. Since fusing more tags does not further improve accuracy effectively, this embodiment fuses only two AprilTags, with the formula:

Q = (r_2 · q_1 + r_1 · q_2) / (r_1 + r_2)

where Q is the fused spatial point, q_1 and q_2 are the spatial points corresponding to the two AprilTags, and r_1, r_2 are the Euclidean distances of the two AprilTags' centre pixels from the image centre.
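Assuming the inverse-distance weighting described above (the tag whose centre is nearer the image centre gets the larger weight; the exact weighting in the patent's equation image may differ), the fusion can be sketched as:

```python
import numpy as np

def fuse_points(q1, q2, c1, c2, img_center):
    """Fuse two AprilTag-derived spatial points q1, q2, weighting each by the
    Euclidean distance of its tag centre (c1, c2) from the image centre.
    Assumption: weight inversely proportional to that distance."""
    r1 = np.linalg.norm(np.asarray(c1, dtype=float) - img_center)
    r2 = np.linalg.norm(np.asarray(c2, dtype=float) - img_center)
    return (r2 * np.asarray(q1, dtype=float) + r1 * np.asarray(q2, dtype=float)) / (r1 + r2)
```

A tag three times farther from the centre thus contributes only a quarter of the fused result, consistent with the noise argument above.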
In step S3, after AprilTag is successfully identified, it is necessary to establish a corresponding relationship between four corner points of a rectangle and three-dimensional space points, and perform attitude calculation. In the embodiment, the pose of the camera under a world coordinate system is solved by combining the PnP in OpenCV with the initial value of the homography matrix, wherein the world coordinate system is an AprilTag coordinate system. After the pose is solved through PnP, the pose is iteratively optimized by adding one gauss-newton method.
The homography matrix is used for describing a mapping relation between two planes, and when the spatial points are known to be coplanar, the homography matrix can be used for solving the camera attitude.
Let the homography matrix be H and let (X, Y, Z) be the coordinates of a spatial point in the AprilTag coordinate system. Because each AprilTag is coplanar with the X-Y plane, Z = 0 for all AprilTag spatial points. Let x be the pixel coordinates of the AprilTag corner points, K the camera intrinsic matrix, R the rotation matrix, and t the translation vector; then

x = s K [R t]_{3×4} [X Y Z 1]^T

With r_0 and r_1 the first and second columns of the rotation matrix R, this rearranges to

x = s K [r_0 r_1 t]_{3×3} [X Y 1]^T

Letting H = s K [r_0 r_1 t] and P = [X Y 1]^T, we finally obtain

x = H P

The H matrix can be solved by algorithms such as DLT. H has 9 entries, but because of scale ambiguity its degrees of freedom are 8, so at least 4 pairs of 2D-3D points are needed to solve it.
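A minimal DLT sketch for the planar case above: each point pair contributes two rows to a 2n × 9 linear system, and the SVD null-space vector gives H up to scale (hence 8 degrees of freedom and the need for at least 4 pairs). This is an illustrative implementation, not the patent's code:

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate the 3x3 homography H (up to scale) mapping planar points
    src -> dst by the Direct Linear Transform; requires >= 4 point pairs."""
    A = []
    for (X, Y), (u, v) in zip(src, dst):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    # The solution is the right singular vector for the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale so that H[2, 2] = 1
```

With noise-free correspondences the null space is one-dimensional and H is recovered exactly; with noisy data the same SVD gives the least-squares solution.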
PnP is a common method for solving the transformation from the world coordinate system to the camera coordinate system: when the correspondence between feature points in the image and spatial points in the world coordinate system is known, as shown in fig. 14, and there are no fewer than three point pairs, the transformation matrix T can be solved by PnP. Common PnP solving methods include DLT, P3P, EPnP, iterative methods, and so on.
In the embodiment, a Levenberg-Marquardt algorithm in a nonlinear optimization algorithm is adopted to solve the BA problem, a good transformation matrix is required to be provided as an initial value before iteration is carried out, otherwise an error function is converged to a local extreme value, and the initial value can be selected as a camera attitude T obtained by a homography matrix.
The Levenberg-Marquardt algorithm is an improved version of the Gauss-Newton method: a damping factor μ is added, which effectively keeps the error moving in a descending direction and ensures the positive definiteness of the coefficient matrix. The formula is as follows:
(J^T J + μI)Δx = −J^T e
In the formula, J is the 2×6 Jacobian matrix of the error function, e is the error function, I is the identity matrix, and J^T is the transpose of J. J is the derivative of the distorted projection model with respect to the pose perturbation; in its entries, (X', Y', Z') are the coordinates of the space point in the camera coordinate system, ξ is the perturbation of the transformation matrix T on the special Euclidean group SE(3), (u, v) are the pixel coordinates, k1 and k2 are distortion coefficients, fx is the x-axis focal length, fy is the y-axis focal length, cx is the x-axis translation of the principal point, cy is the y-axis translation, and d is the distance of the point from the origin on the normalized image plane.
The Levenberg-Marquardt algorithm steps are as follows:
S31, give an initial transformation matrix T0 and a damping factor μ;
S32, for the n-th iteration, solve the Jacobian matrix of the error function e at x and obtain the increment Δx from the formula above, so that x_new = x ⊕ Δx;
S33, calculate the scale factor ρ used to update the damping factor;
S34, if ρ > 0.75, set μ = 2μ and accept the update x_new;
S35, if ρ < 0.25, set μ = μ/3 and reject the update x_new;
S36, judge whether the algorithm has converged; if so, end the iteration, otherwise return to S32;
where ρ is the scale factor used to update the damping factor, defined as the ratio of the actual decrease of the error function to the decrease predicted by its first-order approximation:

ρ = [f(x + Δx) − f(x)] / [J(x)Δx]
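The S31-S36 loop can be sketched as a generic damped least-squares routine. This is an illustrative sketch, not the embodiment's code: the μ-update constants here follow a standard gain-ratio rule (Nielsen's) rather than the exact 0.75/0.25 thresholds above, and all names are assumptions.

```python
import numpy as np

def levenberg_marquardt(residual, jac, x0, mu=1e-3, tol=1e-10, max_iter=200):
    """Damped Gauss-Newton: solve (J^T J + mu*I) dx = -J^T e each iteration."""
    x = np.asarray(x0, float)
    nu = 2.0
    for _ in range(max_iter):
        e = residual(x)
        J = jac(x)
        g = -J.T @ e                           # right-hand side -J^T e
        A = J.T @ J + mu * np.eye(len(x))      # damped normal equations
        dx = np.linalg.solve(A, g)
        if np.linalg.norm(dx) < tol:
            break
        e_new = residual(x + dx)
        actual = e @ e - e_new @ e_new         # actual decrease of the error
        predicted = dx @ (mu * dx + g)         # decrease predicted by the model
        rho = actual / max(predicted, 1e-12)   # gain ratio (the rho of S33)
        if rho > 0:                            # accept the update (cf. S34)
            x = x + dx
            mu *= max(1.0 / 3.0, 1.0 - (2.0 * rho - 1.0) ** 3)
            nu = 2.0
        else:                                  # reject, increase damping (cf. S35)
            mu *= nu
            nu *= 2.0
    return x
```

For a linear residual the gain ratio is exactly 1 and the damping shrinks monotonically, so the routine converges to the least-squares solution.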
In step S3 of this embodiment, the PnP in OpenCV is used to solve the pose of the camera in the world coordinate system, and after the pose is solved through PnP, a Gauss-Newton-type iteration is added to optimize the pose again, so that the pose calculation error is lower than that of the existing AprilTag pose solution. In this embodiment, the two methods are compared through the reprojection error.
The reprojection error is a method for comparing the accuracy of a transformation matrix solved from 2D-3D point pairs. When the coordinates of a space point in the world coordinate system are P = (X_w, Y_w, Z_w), the homogeneous pixel coordinates of the corresponding image point are p = (u, v, 1), the rotation matrix from the world coordinate system to the camera coordinate system is R_cw, the translation vector is t_cw, and the camera intrinsic matrix is K, the resulting reprojection error f(x) is
f(x) = ‖ p − (1/s) K (R_cw P + t_cw) ‖_2^2, where s is the depth of the space point in the camera coordinate system.
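Under this definition, the reprojection error of a set of points can be computed in a few lines; this is a sketch with illustrative names, reporting the mean pixel error over all points rather than a single-point squared norm.

```python
import numpy as np

def reprojection_error(P_world, p_pixel, R_cw, t_cw, K):
    """Mean reprojection error in pixels: reproject world points and compare."""
    P_cam = (R_cw @ P_world.T).T + t_cw        # world frame -> camera frame
    proj = (K @ (P_cam / P_cam[:, 2:3]).T).T   # pinhole projection, divide by depth
    return np.linalg.norm(proj[:, :2] - p_pixel, axis=1).mean()
```

A perfect pose gives an error of zero on noise-free observations; a uniform 3-4 pixel shift of every observation yields exactly 5 pixels of mean error.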
According to the reprojection error definition, error analysis was performed on 50 pictures using the existing AprilTag attitude calculation algorithm and the PnP attitude calculation in OpenCV, where after the PnP result is obtained, the camera attitude T is iteratively optimized again. The reprojection error results of the two methods are shown in fig. 15 and fig. 16, from which it can be seen that the AprilTag attitude solution error is stable at around 0.5 pixels, while the OpenCV attitude solution error is lower.
In the present invention it is necessary to acquire the transformation matrices between AprilTags, but the acquired matrices have large errors that need to be reduced; therefore step S4 optimizes the transformation matrices between AprilTags using pose graph optimization theory.
Pose graph optimization is an application of SLAM back-end optimization. When the feature points in visual SLAM have been optimized several times, they have essentially converged and do not need further optimization; only the trajectory needs to be optimized in subsequent steps, and pose graph optimization provides the theoretical basis for optimizing only the trajectory.
Let the poses of the AprilTags be T1, …, Tn, and let the transformation matrix between Ti and Tj be Tij. The following equation can be obtained:

Tij = Ti^{-1} Tj
However, since the transformation between the attitudes of the AprilTags contains errors, the equation does not hold exactly. Let the error be e_ij; then

e_ij = ln( T_ij^{-1} T_i^{-1} T_j )^∨
Let the set of all edges be ε. The overall objective function is then:

F = (1/2) Σ_{(i,j)∈ε} e_ij^T Ω_ij e_ij
where Ω_ij is a 6×6 diagonal information matrix that represents the uncertainty of the corresponding variables. After the objective function is obtained, nonlinear optimization is performed; the iterative Levenberg-Marquardt algorithm can still be used to solve the pose graph optimization.
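The edge constraint T_ij = T_i^{-1} T_j can be checked numerically with 4×4 homogeneous matrices. The sketch below is illustrative and simplified: it measures the residual as a matrix norm of T_ij^{-1} T_i^{-1} T_j − I rather than the Lie-algebra logarithm used in the text.

```python
import numpy as np

def se3(R, t):
    """Assemble a 4x4 homogeneous transform from R (3x3) and t (3,)."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def edge_residual(T_i, T_j, T_ij):
    """Residual of the pose-graph constraint T_ij = T_i^{-1} T_j."""
    E = np.linalg.inv(T_ij) @ np.linalg.inv(T_i) @ T_j
    return np.linalg.norm(E - np.eye(4))   # zero iff the constraint holds exactly
```

Pose graph optimization adjusts the T_i so that the sum of (information-weighted) residuals over all edges is minimized.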
In this embodiment, the g2o nonlinear optimization library can be used to optimize the errors. g2o is an open-source C++ framework for optimizing graph-structured nonlinear error functions; it is commonly used for SLAM back-end optimization and for solving the Bundle Adjustment problem, and it encapsulates several iterative algorithms, including the Gauss-Newton method, the Dog-Leg algorithm, and the LM algorithm, as well as solver APIs for several kinds of linear equations.
Problems in robotics and computer vision that involve minimizing an error can all be represented as graph-based nonlinear error functions. In SLAM or BA, the overall goal is to find a global minimum of the error function. The structure of the graph is shown in fig. 17, where the motion model and the observation model constitute the edges of the optimized graph. In this embodiment, the edges of the pose graph optimization are the transformation errors between the poses of the AprilTags. The effect of g2o graph optimization on the trajectory is shown in fig. 18.
In the embodiment, the pose graph optimization is adopted to optimize the two-dimensional code array posture, and a common camera and an OpenMV camera are respectively used for comparison.
The AprilTag poses acquired by an ordinary camera are optimized; the result is shown in fig. 19.
The AprilTag poses acquired by OpenMV are optimized; the result is shown in fig. 20.
From the results, the pose graph optimization theory is clearly effective for AprilTag optimization and reduces the errors as much as possible. For example, when the AprilTag poses obtained by OpenMV are optimized, the improvement is obvious; the result is still not optimal because the images obtained by OpenMV are not undistorted, but the influence of the distortion parameters can be reduced as much as possible. For the ordinary camera, since image undistortion and other processing are performed in advance, the AprilTag attitude error before optimization is small, and the effect after optimization is very good.
In this embodiment, the method for improving positioning accuracy is further compared with the prior art: ORBSLAM2 and the positioning effect of the attitude calculation API in the open-source code of the University of Michigan. The comparison data come from compact irregular elliptical motion performed above the 3×3 AprilTag two-dimensional code array, the comparison platform is Ubuntu 16.04, the output data are spatial coordinate points (X, Y, Z), and the visualization platform is Matlab 2019a.
The comparison results are shown in fig. 21-24, and the following conclusions can be drawn from the top view of the trajectory:
(1) The positioning result of ORBSLAM2 has obvious drift and jumping line segments, as shown in FIG. 21, while the AprilTag-based positioning trajectory is clearly compact with essentially no jumping segments, as shown in FIGS. 23 and 24.
(2) The trajectory located by AprilTag carries real scale information; in this embodiment the AprilTag positioning unit is the meter, whereas the specific scale cannot be known from ORBSLAM2 because the monocular scale is undetermined.
(3) After the attitude calculation API in the open-source code of the University of Michigan is used and the result is converted into the world coordinate system, as shown in FIG. 22, it can be seen that some of the attitudes obtained by that algorithm have obvious errors, and the effect is not as good as the attitude calculation of this embodiment. The positioning effect with fused AprilTag information is shown in FIG. 24 and is clearly better at some local positions than the trajectory in FIG. 23 obtained without fusing AprilTag information.
In summary, when clear markers are present and judging only from the top view, the positioning trajectory of this embodiment is slightly better than the ORBSLAM2 positioning trajectory. The positioning trajectory of the embodiment that fuses multiple AprilTag information is also slightly better than the trajectory that uses only a single AprilTag.
Example 3
The embodiment provides an unmanned aerial vehicle positioning method, which comprises the following steps:
b1, placing an AprilTag array in the scene, calculating to obtain camera internal parameters and distortion coefficients by using the camera calibration method in the embodiment 2, and performing configuration correction on the unmanned aerial vehicle;
b2, the unmanned aerial vehicle automatically flies to the designated height to hover after receiving the takeoff instruction of the remote controller;
B3, the camera of the unmanned aerial vehicle identifies the AprilTag information, the AprilTag positioning information is optimized using the method in embodiment 2, and the optimized AprilTag positioning information is sent to the upper computer;
B4, the upper computer receives the AprilTag positioning information transmitted by the unmanned aerial vehicle camera, displays the information in a designated message frame, assigns an ID number to each AprilTag positioning information area, and draws the two-dimensional trajectory on a visual interface;
B5, the upper computer designates one of the ID numbers and sends it to the unmanned aerial vehicle; after receiving the designated ID number, the unmanned aerial vehicle flies to the AprilTag with that ID, and during the process the pose graph optimization in embodiment 2 above is continuously used to optimize the transformation matrices between AprilTags.
Specifically, in step B5, in addition to designating an ID number, the upper computer can also designate a flight trajectory and send it to the unmanned aerial vehicle.
In this embodiment, the unmanned aerial vehicle camera is OpenMV.
Before takeoff, the unmanned aerial vehicle must be configured through the ground station: components such as the gyroscope, accelerometer, and optical flow module are corrected, and safe-flight measures such as low-voltage landing must be set. After all configuration is completed and safety is confirmed, the unmanned aerial vehicle is tested and waits for the takeoff instruction from the remote controller, which is given through an auxiliary channel of the remote controller. After the takeoff instruction is received, the unmanned aerial vehicle automatically flies to the specified height and hovers, and OpenMV starts to recognize the surrounding AprilTags. If the ID given by the upper computer is recognized, the unmanned aerial vehicle hovers at the AprilTag with the specified ID; otherwise it continues to hover and search. A specific drone-side flowchart is shown in fig. 25.
After the upper computer gives an AprilTag target ID, the specific steps executed by the unmanned aerial vehicle end are as follows:
b51, the upper computer designates one ID number and sends the ID number to the unmanned aerial vehicle, and the unmanned aerial vehicle converts the designated ID into a pixel coordinate of a target AprilTag;
b52, decoupling the obtained pixel coordinates from roll and pitch by the unmanned aerial vehicle;
b53, fusing height information h by the unmanned aerial vehicle;
B54, controlling the flight of the unmanned aerial vehicle by using a PID controller.
The specific flow is shown in fig. 26; to reduce the hysteresis of the drone, the controller adopts PID control.
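A PID controller of the kind used in step B54 can be sketched as follows. The discrete form, the gains, and the test plant are illustrative assumptions, not the embodiment's tuning; in the described system one such controller per axis would map the pixel offset of the target AprilTag (after the roll/pitch decoupling and height fusion of B52-B53) to an attitude command.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        # No derivative kick on the very first sample
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

A practical flight controller would additionally clamp the integral term and the output to avoid windup and actuator saturation.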
In this embodiment, the design platform of the upper computer is Windows, the GUI framework is Qt 5.9, and the flowchart is shown in fig. 27.
When the upper computer starts, the program automatically searches for serial ports; a serial port is then opened manually, and the upper computer does not operate until one is opened. After the serial port is opened, the AprilTag information transmitted by OpenMV is received and displayed in the designated information frame, and the two-dimensional trajectory is drawn on the visual interface. When the spatial information of an AprilTag is received, its coordinate system is established on the identified ID, so it must be converted into the world coordinate system, which is defined at the center of the AprilTag array, i.e., the position of the central AprilTag. Any two-dimensional code on the upper computer can then be clicked to command the unmanned aerial vehicle to fly to it; the operation interface of the upper computer is shown in fig. 28.
In this embodiment, in the 3×3 AprilTag array the IDs are, from left to right and top to bottom, 0, 1, 2, 6, 7, 8, 3, 4, 5. The upper computer designates that the unmanned aerial vehicle fly to the AprilTag with ID 7. The flight result is shown in fig. 29, and the positioning information display and visualization of the upper computer are shown in fig. 30, where the shaded area is the top-view trajectory of the unmanned aerial vehicle. It can be seen that the trajectory is basically gathered near the AprilTag with ID 7, and the result basically matches the actual situation.
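The ID layout just described (rows 0-1-2, 6-7-8, 3-4-5, world origin at the central tag) can be turned into a small lookup that converts a tag ID into its offset in the array's world frame. The axis convention (x to the right, y up) and the spacing parameter are assumptions for illustration:

```python
# 3x3 AprilTag array from the embodiment, listed top row first
LAYOUT = [[0, 1, 2],
          [6, 7, 8],
          [3, 4, 5]]

def tag_world_offset(tag_id, spacing):
    """World-frame (x, y) offset of a tag; origin at the central tag (ID 7)."""
    for row_idx, row in enumerate(LAYOUT):
        if tag_id in row:
            col_idx = row.index(tag_id)
            # column 1 / row 1 is the center; y grows upward (assumed convention)
            return ((col_idx - 1) * spacing, (1 - row_idx) * spacing)
    raise ValueError(f"unknown tag id {tag_id}")
```

With such a table, a pose measured relative to any recognized tag can be shifted into the common world frame before fusion and display.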
It should be understood that the above-described embodiments of the present invention are only examples for clearly illustrating the technical solutions of the present invention and are not intended to limit its specific embodiments. Any modification, equivalent replacement, or improvement made within the spirit and principles of the claims of the present invention shall be included in the protection scope of the claims.

Claims (10)

1. A method for improving identification and positioning accuracy of an unmanned aerial vehicle camera to AprilTag is characterized by comprising the following steps:
s1, calibrating a camera on the unmanned aerial vehicle, and calculating to obtain camera internal parameters and distortion coefficients;
s2, on the basis of obtaining the camera internal parameters and the distortion parameters, the camera obtains an AprilTag image, then spatial point information is identified from the obtained AprilTag image, and a plurality of AprilTag spatial point information are fused;
s3, after obtaining the information of the plurality of spatial points, adopting PnP in OpenCV to solve the posture of the camera under the world coordinate system, and adding a Gauss-Newton method to iteratively optimize the posture after solving the posture through the PnP;
and S4, optimizing a transformation matrix between AprilTag by using a pose graph optimization theory.
2. The method of claim 1, wherein the process of solving the camera intrinsic parameters in step S1 includes:
according to the imaging principle of the pinhole camera model, a 3D point P(X, Y, Z) in the camera coordinate system has a projection point p1 with coordinates (u, v) in the pixel coordinate system, and the correspondence between them is
Z [u v 1]^T = K [X Y Z]^T,  K = [fx 0 cx; 0 fy cy; 0 0 1]
wherein K is the camera intrinsic matrix, fx is the x-axis focal length, fy is the y-axis focal length, cx is the x-axis translation, and cy is the y-axis translation;
the distortion coefficient solution can be divided into radial distortion and tangential distortion, and the mathematical model of the solution formula is as follows:
x_distorted = x(1 + k1 r^2 + k2 r^4 + k3 r^6) + 2 p1 x y + p2 (r^2 + 2x^2)
y_distorted = y(1 + k1 r^2 + k2 r^4 + k3 r^6) + p1 (r^2 + 2y^2) + 2 p2 x y
wherein (x, y) are the coordinates of point P in the normalized plane, x_distorted and y_distorted are the distorted coordinates, r is the distance from (x, y) to the origin, k1, k2, k3 are the radial distortion coefficients, and p1, p2 are the tangential distortion coefficients.
3. The method according to claim 1, wherein in step S2 a plurality of AprilTag spatial point information are fused; the specific process is to use the Euclidean distance between each AprilTag center and the image center as a confidence weight and fuse the spatial point information of two AprilTags with the formula:
Q = (r2 · q1 + r1 · q2) / (r1 + r2)
wherein Q is the fused space point, q1 and q2 are the space points corresponding to the two AprilTags, and r1 and r2 are the Euclidean distances from the center pixels of the two AprilTags to the image center.
4. The method for improving the AprilTag identification and positioning accuracy of an unmanned aerial vehicle camera according to claim 1, wherein the specific process of step S3 is to solve a BA (Bundle Adjustment) problem by using the Levenberg-Marquardt algorithm, a nonlinear optimization algorithm, with the formula:
(J^T J + μI)Δx = −J^T e,
in the formula, J is the 2×6 Jacobian matrix of the error function, e is the error function, I is the identity matrix, and J^T is the transpose of J; in the entries of J, (X', Y', Z') are the coordinates of the space point in the camera coordinate system, ξ is the perturbation of the transformation matrix T on the special Euclidean group SE(3), and (u, v) are the pixel coordinates; k1 and k2 are distortion coefficients, fx is the x-axis focal length, fy is the y-axis focal length, cx is the x-axis translation, cy is the y-axis translation, and d is the distance of the point from the origin on the normalized image plane;
the Levenberg-Marquardt algorithm steps are as follows:
S31, give an initial transformation matrix T0 and a damping factor μ;
S32, for the n-th iteration, solve the Jacobian matrix of the error function e at x and obtain the increment Δx from the formula, so that x_new = x ⊕ Δx;
S33, calculate the scale factor ρ used to update the damping factor;
S34, if ρ > 0.75, set μ = 2μ and accept the update x_new;
S35, if ρ < 0.25, set μ = μ/3 and reject the update x_new;
S36, judge whether the algorithm has converged; if so, end the iteration, otherwise return to S32;
where ρ is the scale factor used to update the damping factor, with the expression

ρ = [f(x + Δx) − f(x)] / [J(x)Δx]
5. The method according to claim 1, wherein in step S4, the specific process of optimizing the transformation matrices between AprilTags by using pose graph optimization theory includes:
let the poses of the AprilTags be T1, …, Tn, and the transformation matrix between Ti and Tj be Tij; the following equation can be obtained:

Tij = Ti^{-1} Tj
if there is an error in the transformation between the attitudes of the AprilTags, the equation does not hold exactly; let the error be e_ij, then

e_ij = ln( T_ij^{-1} T_i^{-1} T_j )^∨
the set of all edges is ε, and the overall objective function is as follows:

F = (1/2) Σ_{(i,j)∈ε} e_ij^T Ω_ij e_ij, where Ω_ij is a 6×6 diagonal information matrix.
6. an unmanned aerial vehicle positioning method is characterized by comprising the following steps:
b1, placing AprilTag array in the scene, calculating the camera intrinsic parameters and distortion coefficients by using the step S1 of any one of claims 1 to 5, and performing configuration correction on the unmanned aerial vehicle;
b2, the unmanned aerial vehicle automatically flies to the designated height to hover after receiving the takeoff instruction of the remote controller;
B3, the camera of the unmanned aerial vehicle identifies AprilTag information, the AprilTag positioning information is optimized by steps S2 and S3 of any one of claims 1 to 5, and the optimized AprilTag positioning information is sent to the upper computer;
B4, the upper computer receives the AprilTag positioning information transmitted by the unmanned aerial vehicle camera, displays the information in a designated message frame, assigns an ID number to each AprilTag positioning information area, and draws the two-dimensional trajectory on a visual interface;
B5, the upper computer designates one of the ID numbers and sends it to the unmanned aerial vehicle; after receiving the designated ID number, the unmanned aerial vehicle flies to the AprilTag with the designated ID, and during the process the transformation matrices between AprilTags are continuously optimized by using step S4 of claim 1.
7. The method according to claim 6, wherein in step B5, the upper computer specifies the ID number to be sent to the UAV, and further comprises specifying the flight trajectory to be sent to the UAV.
8. The method according to claim 6, wherein in step B5, after receiving the ID number, the unmanned aerial vehicle flies to the AprilTag with the designated ID, which specifically includes:
b51, the upper computer designates one ID number and sends the ID number to the unmanned aerial vehicle, and the unmanned aerial vehicle converts the designated ID into a pixel coordinate of a target AprilTag;
b52, decoupling the obtained pixel coordinates from roll and pitch by the unmanned aerial vehicle;
b53, fusing height information h by the unmanned aerial vehicle;
and B54, controlling the unmanned aerial vehicle to fly by using a PID controller.
9. An unmanned aerial vehicle positioning system applying the positioning method of any one of claims 6 to 8, comprising an indoor space, a two-dimensional code, an unmanned aerial vehicle and an upper computer, wherein the two-dimensional code is placed on the ground of the indoor space, the unmanned aerial vehicle is connected with the upper computer through Bluetooth or WIFI, and the upper computer controls the unmanned aerial vehicle.
10. The drone positioning system of claim 9, wherein the drone includes a drone body, a drive-free camera, an optical flow sensor, and an OpenMV, the drive-free camera, the optical flow sensor, and the OpenMV being connected to the drone body; the upper computer is a PC terminal.
CN202011008679.0A 2020-09-23 2020-09-23 Method for improving identification and positioning precision of unmanned aerial vehicle camera to april tag and positioning method and system Active CN112184812B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011008679.0A CN112184812B (en) 2020-09-23 2020-09-23 Method for improving identification and positioning precision of unmanned aerial vehicle camera to april tag and positioning method and system


Publications (2)

Publication Number Publication Date
CN112184812A true CN112184812A (en) 2021-01-05
CN112184812B CN112184812B (en) 2023-09-22

Family

ID=73955357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011008679.0A Active CN112184812B (en) 2020-09-23 2020-09-23 Method for improving identification and positioning precision of unmanned aerial vehicle camera to april tag and positioning method and system

Country Status (1)

Country Link
CN (1) CN112184812B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019136613A1 (en) * 2018-01-09 2019-07-18 深圳市沃特沃德股份有限公司 Indoor locating method and device for robot
CN109697753A (en) * 2018-12-10 2019-04-30 智灵飞(北京)科技有限公司 A kind of no-manned plane three-dimensional method for reconstructing, unmanned plane based on RGB-D SLAM
CN109658461A (en) * 2018-12-24 2019-04-19 中国电子科技集团公司第二十研究所 A kind of unmanned plane localization method of the cooperation two dimensional code based on virtual simulation environment
CN110929642A (en) * 2019-11-21 2020-03-27 扬州市职业大学(扬州市广播电视大学) Real-time estimation method for human face posture based on two-dimensional feature points
CN111197984A (en) * 2020-01-15 2020-05-26 重庆邮电大学 Vision-inertial motion estimation method based on environmental constraint

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIAO LIANG ET AL: "Moving target tracking method for unmanned aerial vehicle/unmanned ground vehicle heterogeneous system based on AprilTags", 《MEASUREMENT AND CONTROL》, vol. 53, no. 3, pages 427 - 440 *
GAO Jiayu et al.: "UAV Landing Guidance Method Based on AprilTag Two-dimensional Code", Modern Navigation, no. 01, pages 20 - 25 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113674340A (en) * 2021-07-05 2021-11-19 北京物资学院 Binocular vision navigation method and device based on landmark points
CN113790668A (en) * 2021-07-26 2021-12-14 广东工业大学 Intelligent cargo measuring system based on multi-rotor unmanned aerial vehicle
CN113790668B (en) * 2021-07-26 2023-06-06 广东工业大学 Intelligent cargo measurement system based on multi-rotor unmanned aerial vehicle
CN113485449A (en) * 2021-08-16 2021-10-08 普宙科技(深圳)有限公司 Unmanned aerial vehicle autonomous landing method and system based on nested two-dimensional code
CN114007047A (en) * 2021-11-02 2022-02-01 广西电网有限责任公司贺州供电局 Power transformation field operation real-time monitoring and alarming system based on machine vision
CN114007047B (en) * 2021-11-02 2024-03-15 广西电网有限责任公司贺州供电局 Machine vision-based real-time monitoring and alarming system for operation of power transformation site
CN114200948A (en) * 2021-12-09 2022-03-18 中国人民解放军国防科技大学 Unmanned aerial vehicle autonomous landing method based on visual assistance
CN114200948B (en) * 2021-12-09 2023-12-29 中国人民解放军国防科技大学 Unmanned aerial vehicle autonomous landing method based on visual assistance

Also Published As

Publication number Publication date
CN112184812B (en) 2023-09-22

Similar Documents

Publication Publication Date Title
CN112184812B (en) Method for improving identification and positioning precision of unmanned aerial vehicle camera to april tag and positioning method and system
Courbon et al. Vision-based navigation of unmanned aerial vehicles
Wu et al. Autonomous flight in GPS-denied environments using monocular vision and inertial sensors
US9355453B2 (en) Three-dimensional measurement apparatus, model generation apparatus, processing method thereof, and non-transitory computer-readable storage medium
WO2018145291A1 (en) System and method for real-time location tracking of drone
Sanfourche et al. Perception for UAV: Vision-Based Navigation and Environment Modeling.
WO2022042184A1 (en) Method and apparatus for estimating position of tracking target, and unmanned aerial vehicle
Eynard et al. Real time UAV altitude, attitude and motion estimation from hybrid stereovision
Courbon et al. Visual navigation of a quadrotor aerial vehicle
Magree et al. Monocular visual mapping for obstacle avoidance on UAVs
Celik et al. Mono-vision corner SLAM for indoor navigation
Hinzmann et al. Flexible stereo: constrained, non-rigid, wide-baseline stereo vision for fixed-wing aerial platforms
Luo et al. Docking navigation method for UAV autonomous aerial refueling
Moore et al. A stereo vision system for uav guidance
Li et al. Metric sensing and control of a quadrotor using a homography-based visual inertial fusion method
Hinzmann et al. Robust map generation for fixed-wing UAVs with low-cost highly-oblique monocular cameras
Mansur et al. Real time monocular visual odometry using optical flow: study on navigation of quadrotors UAV
Wang et al. Pose and velocity estimation algorithm for UAV in visual landing
CN112945233A (en) Global drift-free autonomous robot simultaneous positioning and map building method
CN110136168B (en) Multi-rotor speed measuring method based on feature point matching and optical flow method
Andert et al. Autonomous vision-based helicopter flights through obstacle gates
Abdulov et al. Visual odometry approaches to autonomous navigation for multicopter model in virtual indoor environment
Klavins et al. Unmanned aerial vehicle movement trajectory detection in open environment
Aminzadeh et al. Implementation and performance evaluation of optical flow navigation system under specific conditions for a flying robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant