CN112116651B - Ground target positioning method and system based on monocular vision of unmanned aerial vehicle - Google Patents

Ground target positioning method and system based on monocular vision of unmanned aerial vehicle

Info

Publication number
CN112116651B
CN112116651B (application CN202010807215.XA)
Authority
CN
China
Prior art keywords
image
ground target
unmanned aerial
aerial vehicle
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010807215.XA
Other languages
Chinese (zh)
Other versions
CN112116651A (en)
Inventor
康颖
王宁
史殿习
张拥军
秦伟
崔玉宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center
Original Assignee
Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center filed Critical Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center
Priority to CN202010807215.XA priority Critical patent/CN112116651B/en
Publication of CN112116651A publication Critical patent/CN112116651A/en
Application granted granted Critical
Publication of CN112116651B publication Critical patent/CN112116651B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters

Abstract

The invention provides a ground target positioning method and system based on monocular vision of an unmanned aerial vehicle, which comprises the following steps: acquiring the position of a ground target in each frame of monocular vision image of the unmanned aerial vehicle; calculating real coordinates of the ground target by adopting the field angle based on the position in the image; and superposing the real coordinates of the ground target with the pose information of the unmanned aerial vehicle to obtain the positioning information of the ground target in the flying process of the unmanned aerial vehicle. Current ground target positioning based on the onboard camera of an unmanned aerial vehicle requires calibrated camera focal length parameters for the calculation, and if the camera cannot be calibrated accurately, the positioning result will contain a large error. The method addresses scenes with insufficient calibration conditions: instead of a calibrated camera focal length, it uses the field angle parameters of the camera for the positioning calculation, so only the rated field angle of the camera needs to be obtained and no calibration is required in advance. This improves the usability and accuracy of the positioning algorithm when calibration conditions are lacking.

Description

Ground target positioning method and system based on monocular vision of unmanned aerial vehicle
Technical Field
The invention belongs to the technical field of computer vision and unmanned aerial vehicle sensors, and particularly relates to a ground target positioning method and system based on monocular vision of an unmanned aerial vehicle.
Background
Unmanned Aerial Vehicles (UAVs) have played an important role in various scenes in recent years, such as search and rescue, reconnaissance, environmental monitoring, and traffic management. In these application scenarios, a very important task of a drone is to detect and track objects of interest. For example, in a search and rescue task, the unmanned aerial vehicle circles above a disaster area, accurately detects disaster victims who need to be rescued, and provides their locations so that rescue workers can carry out the rescue conveniently. In the task of detecting and tracking a target by an unmanned aerial vehicle, real coordinate positioning of the target is indispensable. In the aspect of target positioning, researchers have done a lot of research work, involving vision-based positioning, passive radar positioning, sonar system positioning, infrared positioning and the like, and each positioning method depends on specific sensor support. Among these positioning methods, vision-based positioning is not only simple and easy to use, but vision cameras are also comparatively affordable. Therefore, with the rapid development of computer vision technology, a large number of vision positioning systems have been developed and widely applied in various fields.
Vision-based target positioning is one of the research hotspots of computer vision because of its important practical application value. The task of target positioning is to accurately estimate the relative position coordinates of a target and an unmanned aerial vehicle in the real world on the premise that the position of the target in the image has been determined. The position of the target in the image can be obtained from a target bounding box given by manual framing, a target detection algorithm or a target tracking algorithm. Positioning with a single camera onboard a drone is called monocular vision positioning. Monocular vision positioning for unmanned aerial vehicles is currently divided into two branches: single-drone positioning and multi-drone positioning. Single-drone positioning needs to treat the ground as a horizontal plane, measure the relative height between the unmanned aerial vehicle and the ground through a Global Positioning System (GPS) sensor, and then complete the target positioning according to the flying height of the unmanned aerial vehicle and the position of the target in the camera image. Multi-drone positioning uses a plurality of unmanned aerial vehicles to continuously observe the same target; each unmanned aerial vehicle obtains a target position using a single-drone positioning algorithm, and the target positioning coordinates obtained by each unmanned aerial vehicle are then fused by a fusion algorithm to obtain the final target position. Multi-drone positioning is in fact the effective fusion of several single-drone positionings, so its basis is also single-drone positioning. The most effective method in current single-drone monocular vision positioning algorithms uses camera calibration and spatial dimension transformation, utilizing internal and external camera parameters calibrated in advance to carry out a 2D-to-3D spatial transformation of the target image and thereby complete the spatial coordinate positioning of the ground target. The method depends on accurate calibration of the camera parameters, and if the calibration is not accurate, a large error occurs in the positioning result. Calibration requires specific equipment and environment, and is easily affected by factors such as illumination and whether the checkerboard is standard, so it is difficult to implement in scenes where calibration conditions are insufficient. How to accurately position a ground target when calibration conditions are insufficient has become a problem that urgently needs to be solved.
During the flight of the unmanned aerial vehicle, the motion of the vehicle causes the pose angle of its onboard camera to change, which changes the 2D-to-3D coordinate mapping relation used in target positioning and therefore causes positioning errors. Such an error is fatal in the target tracking and positioning process and is likely to directly affect the flight path along which the unmanned aerial vehicle tracks the target, so that the unmanned aerial vehicle may lose the target. Therefore, how to ensure continuous and accurate target positioning while the unmanned aerial vehicle is moving is another problem to be solved.
The existing monocular vision target positioning algorithms for unmanned aerial vehicles cannot effectively cope with insufficient calibration conditions or provide continuous positioning while the unmanned aerial vehicle is in flight, so research on a positioning algorithm that is continuous, accurate and independent of accurate camera calibration is particularly important.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a ground target positioning method based on monocular vision of an unmanned aerial vehicle, which comprises the following steps:
acquiring the position of a ground target in each frame of monocular vision image of the unmanned aerial vehicle;
calculating real coordinates of the ground target by adopting a field angle based on the position in the image;
and superposing the real coordinates of the ground target with the pose information of the unmanned aerial vehicle to obtain the positioning information of the ground target in the flying process of the unmanned aerial vehicle.
Preferably, the acquiring the position of the ground target in each frame of the monocular vision image of the drone includes:
when the ground target appears in the image frame of the monocular visual image of the unmanned aerial vehicle for the first time, framing an initial target frame containing the ground target;
and initializing a target tracking algorithm according to the initial target frame, performing target tracking on each frame of image after the ground target appears for the first time by adopting the target tracking algorithm, and framing out the ground target in real time to obtain the position of the ground target in each frame of monocular vision image of the unmanned aerial vehicle.
Preferably, the calculating the real coordinates of the ground target by using the field angle based on the position in the image includes:
establishing a world coordinate system by taking the position of the airborne camera as the origin of the world coordinate system;
calculating the image coordinate offset between the ground target and the image center point on the image frame;
calculating the coordinates of the position points of the ground target on the image frame on a world coordinate system according to the image coordinate offset and in combination with the field angle;
and calculating the real coordinate of the ground target according to the coordinate of the position point on the world coordinate system.
Preferably, the calculating the coordinates of the position point of the ground target on the image frame on the world coordinate system according to the image coordinate offset and the angle of view includes:
calculating the real-world distance from the image center point to the image boundary according to the distance from the image center point to the image boundary in combination with the field angle;
calculating the coordinate offset of the ground target between the position point of the image frame and the image center point in the real world according to the distance from the image center point to the image boundary, the distance in the real world and the image coordinate offset;
obtaining initial coordinates of the position point in the real world without considering camera rotation and unmanned aerial vehicle flying height by using the coordinate offset of the real world and combining a focal length;
and sequentially superposing the initial coordinates on the rotation matrix of the camera and the flight height of the unmanned aerial vehicle to obtain the coordinates of the position point of the ground target on the image frame on a world coordinate system.
Preferably, the calculation formula of the real-world distance from the image center point to the image boundary is as follows:

L_w/2 = f_c · tan(F_hor / 2),  L_h/2 = f_c · tan(F_ver / 2)

wherein L_w/2 represents the real-world distance from the image center point to the image boundary in the horizontal direction, L_h/2 represents the real-world distance from the image center point to the image boundary in the vertical direction, f_c denotes the focal length of the camera, F_hor represents the horizontal field angle of the camera, and F_ver represents the vertical field angle of the camera.
Preferably, the real-world coordinate offset is calculated as follows:

L_Δw = (2Δw / w) · L_w/2
L_Δh = (2Δh / h) · L_h/2

in the formula, L_Δw represents the real-world coordinate offset in the horizontal direction, Δw represents the image coordinate offset in the horizontal direction, w represents the image width, L_Δh denotes the real-world coordinate offset in the vertical direction, Δh denotes the image coordinate offset in the vertical direction, and h denotes the image height.
Preferably, the calculating the real coordinates of the ground target according to the coordinates of the position point on the world coordinate system includes:
calculating a line function equation of the camera coordinate point and the position point in the real world according to the coordinate of the position point on the world coordinate system;
and (4) with the ground height as 0, eliminating the focal length by adopting the connection line function equation, and calculating the real coordinate of the ground target.
Preferably, the superimposing the real coordinates of the ground target on the pose information of the unmanned aerial vehicle to obtain the positioning information of the ground target in the flying process of the unmanned aerial vehicle includes:
acquiring pose information of the unmanned aerial vehicle through an inertial measurement unit, and calculating a rotation matrix of the unmanned aerial vehicle based on the pose information;
and superposing the unmanned aerial vehicle rotation matrix to the rotation matrix of the camera, and calculating to obtain the positioning information of the ground target in the flying process of the unmanned aerial vehicle.
Preferably, after calculating the real coordinates of the ground target by using the field angle based on the position in the image and before superimposing the real coordinates of the ground target on the pose information of the unmanned aerial vehicle, the method further includes:
and filtering and denoising the pose information of the unmanned aerial vehicle by adopting a Kalman filtering method.
Based on the same inventive concept, the application also provides a ground target positioning system based on unmanned aerial vehicle monocular vision, which is characterized by comprising: the device comprises a position acquisition module, a coordinate calculation module and a positioning module;
the position acquisition module is used for acquiring the position of the ground target in each frame of unmanned aerial vehicle monocular vision image;
the coordinate calculation module is used for calculating the real coordinate of the ground target by adopting an angle of view based on the position in the image;
and the positioning module is used for superposing the real coordinates of the ground target with the pose information of the unmanned aerial vehicle to obtain the positioning information of the ground target in the flying process of the unmanned aerial vehicle.
Compared with the closest prior art, the invention has the following beneficial effects:
the invention provides a ground target positioning method and system based on monocular vision of an unmanned aerial vehicle, which comprises the following steps: acquiring the position of a ground target in each frame of monocular vision image of the unmanned aerial vehicle; calculating real coordinates of the ground target by adopting the field angle based on the position in the image; and superposing the real coordinates of the ground target with the pose information of the unmanned aerial vehicle to obtain the positioning information of the ground target in the flying process of the unmanned aerial vehicle. The calibrated camera focal length parameters are needed to be used for calculating aiming at the current ground target positioning based on the unmanned aerial vehicle airborne camera, and if the camera cannot be calibrated accurately, a positioning result can generate a large error. The method can solve the problem of scenes with insufficient calibration conditions, does not use the focal length of the camera to be calibrated, but adopts the field angle parameter of the camera to perform positioning calculation, only needs to acquire the rated field angle of the camera, and does not need to calibrate in advance. This may improve the usability and accuracy of the positioning algorithm in situations of short-of-condition.
Drawings
Fig. 1 is a schematic flow chart of a ground target positioning method based on monocular vision of an unmanned aerial vehicle according to the present invention;
FIG. 2 is a general flowchart of a ground target positioning method based on monocular vision of an unmanned aerial vehicle according to the present invention;
FIG. 3 is a diagram of a positioner frame provided in the present invention;
fig. 4 is a schematic diagram of a basic structure of ground target positioning based on monocular vision of an unmanned aerial vehicle provided by the invention;
fig. 5 is a schematic diagram of a detailed structure of ground target positioning based on monocular vision of an unmanned aerial vehicle provided by the invention.
Detailed Description
The following detailed description of embodiments of the invention is provided in connection with the accompanying drawings.
The invention aims to solve the technical problem of providing a ground target positioning method based on monocular vision of an unmanned aerial vehicle using the camera field of view (FOV), which does not depend on accurate camera focal length calibration and does not require strict camera calibration, realizes the real physical coordinate positioning of a target in a visual image, and improves the real-time performance and usability of the positioning method. Meanwhile, an Inertial Measurement Unit (IMU) and the pose data it measures are applied to the positioning algorithm, so that the method can also perform accurate target positioning when the pose of the unmanned aerial vehicle changes.
Example 1:
the invention provides a schematic flow diagram of a ground target positioning method based on monocular vision of an unmanned aerial vehicle, which is shown in figure 1 and comprises the following steps:
s1: acquiring the position of a ground target in each frame of monocular vision image of the unmanned aerial vehicle;
s2: calculating real coordinates of the ground target by adopting the field angle based on the position in the image;
s3: and superposing the real coordinates of the ground target with the pose information of the unmanned aerial vehicle to obtain the positioning information of the ground target in the flying process of the unmanned aerial vehicle.
The specific technical scheme of the invention is shown in figure 2, and comprises the following steps:
the method comprises the following steps of firstly, realizing a camera-based field of view (FOV) ground target positioning algorithm, and calculating real coordinates of a ground target in the flight process of the unmanned aerial vehicle.
The specific method comprises the following steps:
1.1 Establish a world coordinate system (x, y, z) by taking the position of the onboard camera as the world coordinate system origin O(0, 0, 0).
1.2 The onboard camera is controlled by a gimbal and can rotate freely. The rotation angle of the camera relative to the initial angle at each moment is calculated by the following specific method:
1.2.1 Obtain in real time the rotation angle of the onboard camera relative to the initial angle; this rotation angle can be obtained from the gimbal connecting the onboard camera with the drone. The rotation angles are expressed in the form of Euler angles, namely the roll angle (Roll), pitch angle (Pitch) and yaw angle (Yaw), which are denoted by φ, β and γ, respectively.
1.2.2 According to the three rotation angles φ, β and γ, the corresponding three rotation matrices R_φ, R_β, R_γ are obtained by the rotation matrix calculation method; the calculation formulas are formula (1) to formula (3):

R_φ = [ 1, 0, 0; 0, cos φ, −sin φ; 0, sin φ, cos φ ]    (1)

R_β = [ cos β, 0, sin β; 0, 1, 0; −sin β, 0, cos β ]    (2)

R_γ = [ cos γ, −sin γ, 0; sin γ, cos γ, 0; 0, 0, 1 ]    (3)

From this, the resultant rotation matrix R_φβγ = R_γ R_β R_φ can be calculated.
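By way of illustration, the following is a minimal numerical sketch of formulas (1) to (3) and of the composite rotation R_φβγ = R_γ R_β R_φ; it assumes standard right-handed rotations about the x, y and z axes, so the actual sign convention of the onboard gimbal should be checked against its documentation.

```python
import numpy as np

def gimbal_rotation(phi, beta, gamma):
    """Rotation matrices of formulas (1)-(3) and their composition R_gamma R_beta R_phi.

    Assumes right-handed rotations about x (roll), y (pitch) and z (yaw);
    angles are given in radians.
    """
    r_phi = np.array([[1, 0, 0],
                      [0, np.cos(phi), -np.sin(phi)],
                      [0, np.sin(phi),  np.cos(phi)]])
    r_beta = np.array([[ np.cos(beta), 0, np.sin(beta)],
                       [0, 1, 0],
                       [-np.sin(beta), 0, np.cos(beta)]])
    r_gamma = np.array([[np.cos(gamma), -np.sin(gamma), 0],
                        [np.sin(gamma),  np.cos(gamma), 0],
                        [0, 0, 1]])
    return r_gamma @ r_beta @ r_phi
```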
1.3 Calculate the position coordinates of the target, i.e. the image coordinate offset (Δw, Δh) of the position point T from the image center point C. The coordinate value of T can be calculated from the target frame containing the ground target; the calculation formula is formula (4):

u_T = rx + rw/2,  v_T = ry + rh/2    (4)

In the formula, u_T is the horizontal coordinate of T, v_T is the vertical coordinate of T, rx is the coordinate of the top-left vertex of the target frame in the image u direction (the horizontal direction), ry is the coordinate of the top-left vertex of the target frame in the image v direction (the vertical direction), rw is the length of the target frame, and rh is the width of the target frame.

The coordinate value of C is (w/2, h/2). The coordinate offset can therefore be determined by subtracting the coordinate values of T and C:

Δw = u_T − w/2,  Δh = v_T − h/2

where w represents the image width and h represents the image height.
1.4 The image coordinate system distance from the image center point C to the image boundary is (w/2, h/2). Calculate the corresponding distance (L_w/2, L_h/2) in the real world; the calculation formula is formula (5):

L_w/2 = f_c · tan(F_hor / 2),  L_h/2 = f_c · tan(F_ver / 2)    (5)

wherein f_c denotes the focal length of the camera (in meters), and F_hor and F_ver represent the horizontal and vertical field angles of the camera, respectively. The horizontal and vertical field angles are rated parameters of the camera.
1.5 Calculate the coordinates of the position point T of the target in the image in the real world coordinate system. The specific method is as follows:

1.5.1 According to the image coordinate system distance (w/2, h/2) of C and its distance (L_w/2, L_h/2) in the real world, together with the center offset of the position coordinate T of the target, namely the image coordinate offset (Δw, Δh) and its offset (L_Δw, L_Δh) in the real world, the proportional relation between the image coordinate system and the real coordinate system is

Δw / (w/2) = L_Δw / L_w/2,  Δh / (h/2) = L_Δh / L_h/2

from which the values of L_Δw and L_Δh can be found.
1.5.2 The true distance from the image plane to the camera position is the focal length f_c, so the coordinates of the point T in the real world can be obtained as T_W = (L_Δw, L_Δh, −f_c).
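As a sketch of steps 1.3 to 1.5 under the assumptions above (bounding box {rx, ry, rw, rh}, rated field angles in radians, and a downward-looking initial camera pose), the focal length f_c only enters as a common scale factor and cancels in the later ground intersection, so a placeholder value of 1.0 is used here:

```python
import numpy as np

def image_point_to_camera_frame(box, img_w, img_h, fov_hor, fov_ver, f_c=1.0):
    """Steps 1.3-1.5: map the bounding-box centre to a point on the image plane
    expressed in the (un-rotated) camera world frame.

    box:     (rx, ry, rw, rh), with (rx, ry) the top-left vertex of the target frame
    fov_hor, fov_ver: rated horizontal/vertical field angles, in radians
    f_c:     focal length; it cancels in the final ground intersection, so the
             placeholder value 1.0 is sufficient (assumption of this sketch)
    """
    rx, ry, rw, rh = box
    u_t = rx + rw / 2.0                      # formula (4)
    v_t = ry + rh / 2.0
    dw = u_t - img_w / 2.0                   # image offsets from the centre C
    dh = v_t - img_h / 2.0
    l_w2 = f_c * np.tan(fov_hor / 2.0)       # formula (5)
    l_h2 = f_c * np.tan(fov_ver / 2.0)
    l_dw = dw / (img_w / 2.0) * l_w2         # proportional relation of 1.5.1
    l_dh = dh / (img_h / 2.0) * l_h2
    # 1.5.2: the image plane sits at the focal distance; a downward-looking
    # initial camera pose is assumed here.
    return np.array([l_dw, l_dh, -f_c])
```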
1.6 Calculate the coordinates of the target position point T in the image in the real world coordinate system after the camera has rotated, i.e. superimpose the rotation matrices R_φ, R_β, R_γ of the camera on the initial real world coordinate T_W of point T; the calculation formula is formula (6):

T_W' = R_φβγ · T_W = R_γ R_β R_φ · T_W    (6)

1.7 Apply the flying height of the unmanned aerial vehicle to the calculation of T_W', obtaining the real world coordinates of the target point during the flight of the unmanned aerial vehicle:

T_W'' = (x_T, y_T, z_T) = T_W' + (0, 0, h_u)

wherein h_u is the real flying height of the unmanned aerial vehicle, which can be obtained by a GPS sensor.
1.8 Define the ground plane as z = 0 and calculate the real coordinate T_R of the ground target. The specific method is as follows:

1.8.1 Calculate the line (connection) function equation, in the real world, between the camera coordinate point and the imaging point of the target in the image. After the unmanned aerial vehicle takes off, the real world coordinate of the camera is (0, 0, h_u); then, according to the two-point form of a straight line, the connection equation is:

x / x_T = y / y_T = (z − h_u) / (z_T − h_u)

1.8.2 Substituting z = 0 into the connection equation, the real coordinates of the ground target are obtained:

T_R = ( −h_u · x_T / (z_T − h_u),  −h_u · y_T / (z_T − h_u),  0 )

During this calculation, x_T, y_T and z_T − h_u all contain the same factor f_c, so f_c can be eliminated, realizing positioning without f_c.
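Combining steps 1.6 to 1.8, the following sketch intersects the camera-to-point ray with the ground plane; it reuses the hypothetical helpers gimbal_rotation and image_point_to_camera_frame from the sketches above, and the result is independent of the placeholder focal length because f_c cancels in the ratio.

```python
import numpy as np

def locate_ground_target(t_w, r_cam, h_u):
    """Steps 1.6-1.8: rotate the image-plane point, lift it by the flying height,
    then intersect the line through the camera with the ground plane z = 0.

    t_w:   point from image_point_to_camera_frame (proportional to f_c)
    r_cam: composite camera rotation matrix (R_phi_beta_gamma, or R_U_phi_beta_gamma
           once the UAV pose of step 2 is superimposed)
    h_u:   flying height of the UAV from the GPS sensor, in metres
    """
    x_t, y_t, z_t = r_cam @ t_w + np.array([0.0, 0.0, h_u])   # formula (6) + step 1.7
    s = -h_u / (z_t - h_u)     # ray parameter at z = 0; z_t - h_u is proportional to f_c
    return np.array([s * x_t, s * y_t, 0.0])                  # real coordinate T_R
```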
Secondly, the pose information of the unmanned aerial vehicle is superimposed in the target positioning. The camera is controlled by the gimbal to rotate, and the rotation matrix R_φβγ can be obtained. During the flight of the unmanned aerial vehicle, with the pose constantly changing, the IMU can acquire the pose information and solve the rotation matrix R_U of the unmanned aerial vehicle. Superimposing R_U on the rotation matrix of the camera enables the target to be tracked accurately during the flight of the unmanned aerial vehicle.
The specific method comprises the following steps:
2.1 Obtain the pose information of the unmanned aerial vehicle through the IMU and calculate the rotation matrix R_U of the unmanned aerial vehicle.
2.1.1 The data obtained by the IMU is in the form of a quaternion. A quaternion is a hypercomplex number consisting of one real part and three imaginary parts, and a unit quaternion can represent a rotation operation in three-dimensional space. Defining u_R as a rotation axis in three-dimensional space, the quaternion information acquired by the IMU can be defined as equation (8):

q = [ cos(α/2), u_R · sin(α/2) ]    (8)

wherein α is the angle of rotation about u_R.
2.1.2 Let another quaternion p represent a vector before rotation, and let p_R be the quaternion obtained after rotating p; the rotation formula is p_R = q ⊗ p ⊗ q* (where ⊗ represents the quaternion product and q* is the conjugate of q). Further solving gives equation (9):

v_R = (q_0² − |q_v|²) · v + 2 (q_v · v) · q_v + 2 q_0 (q_v × v)    (9)

wherein q_0 is the scalar part of q, q_v is the vector part of q, and v and v_R are respectively the vector parts of p and p_R. From this, the rotation matrix R_U of the unmanned aerial vehicle can be solved.
2.2 Superimpose the unmanned aerial vehicle rotation matrix R_U on the positioning of the ground target. The rotation of the drone and the rotation of the camera gimbal can be regarded as a superimposed relationship: the camera first rotates with the gimbal, and then rotates with the drone on top of the rotation of the gimbal, so the overall rotation matrix of the camera can be defined as R_U_φβγ = R_U R_φβγ. Using R_U_φβγ to replace R_φβγ in the first step realizes the superposition of the pose matrix of the unmanned aerial vehicle in the positioning process.
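As an illustration of step 2, the sketch below converts an IMU quaternion (scalar part first, an assumption of this sketch) into the rotation matrix R_U and superimposes it on the gimbal rotation; the matrix form is the standard expansion of equation (9).

```python
import numpy as np

def quaternion_to_rotation(q):
    """Rotation matrix R_U from a unit quaternion (q0, qx, qy, qz), scalar part first."""
    q = np.asarray(q, dtype=float)
    q0, qx, qy, qz = q / np.linalg.norm(q)        # normalise defensively
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - q0*qz),     2*(qx*qz + q0*qy)],
        [2*(qx*qy + q0*qz),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - q0*qx)],
        [2*(qx*qz - q0*qy),     2*(qy*qz + q0*qx),     1 - 2*(qx*qx + qy*qy)],
    ])

# Step 2.2: the UAV rotation is applied on top of the gimbal rotation.
# r_u_phi_beta_gamma = quaternion_to_rotation(q_imu) @ gimbal_rotation(phi, beta, gamma)
```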
The motion pose data of the unmanned aerial vehicle are superimposed onto the target positioning algorithm in real time, so that the ground target can be accurately positioned no matter how the unmanned aerial vehicle moves and how its angle changes. In this way, continuous positioning at all times is truly realized.
A diagram of the positioner frame based on the methods of the first and second steps is shown in Fig. 3, where the RPY angles are the roll, pitch and yaw angles.
Thirdly, based on the angular velocity information of the unmanned aerial vehicle observed by the IMU, filter and denoise the pose information of the unmanned aerial vehicle using a Kalman filter.
The specific method comprises the following steps:
3.1 Define the quaternion q = [cos(α/2), u_R · sin(α/2)]; its derivative with respect to time t is

dq/dt = (1/2) · (dα/dt) · [ −sin(α/2), u_R · cos(α/2) ]

wherein u_R represents a fixed rotation axis, so du_R/dt = 0, and α is the rotation angle.
3.2 Solve the derivative of the rotation angle α with respect to time t: dα/dt = ω_E, where ω_E is the instantaneous angular velocity of rotation about the u_R axis, whose component velocities on the three component axes of u_R are ω_E = (ω_x, ω_y, ω_z).
3.3 Solve the update formula of q(t + Δt) with respect to q(t).
3.3.1 From the formula in 3.1, extract a factor of q; substituting ω_E from 3.2, we obtain:

dq/dt = (1/2) · Ω(t) · q

where Ω(t) is a 4 × 4 matrix composed of the components of ω_E.
3.3.2 The formula dq/dt = (1/2) · Ω(t) · q in 3.3.1 is a homogeneous linear equation, and the discrete form of its general solution can be obtained:

q(t + Δt) = exp( (1/2) · Ω · Δt ) · q(t)
3.3.3 Applying Taylor expansion to the above formula and keeping the first two terms gives equation (10):

q(t + Δt) ≈ ( I + (1/2) · Ω · Δt ) · q(t)    (10)
3.4 Calculate the instantaneous angular velocity using the sampled IMU angular velocity data, and construct a Kalman filter to denoise the pose information.
3.4.1 Define f_G as the angular velocity sampling frequency of the IMU, so that Δt = 1/f_G and

ω ≈ Δα / Δt

where Δα/Δt represents the change in angle over the time Δt, which approximates the instantaneous angular velocity when Δt is sufficiently small.
3.4.2 According to the update formula of 3.3.3 and all known quantities, construct a Kalman filter to complete the denoising of the pose. End.
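A minimal sketch of the prediction step used in step 3: the quaternion is propagated with formula (10) from the angular velocity sampled at frequency f_G. The correction step of the Kalman filter is only indicated in a comment, since the process and measurement noise covariances are design choices that the text leaves open.

```python
import numpy as np

def omega_matrix(w):
    """4x4 matrix Omega(t) built from the angular velocity (wx, wy, wz);
    one common sign convention is assumed here."""
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [wx,  0.0,  wz, -wy],
                     [wy, -wz,  0.0,  wx],
                     [wz,  wy, -wx,  0.0]])

def predict_quaternion(q, w, f_g):
    """Formula (10): q(t + dt) ~ (I + 0.5 * Omega * dt) q(t), with dt = 1 / f_G."""
    dt = 1.0 / f_g
    q_pred = (np.eye(4) + 0.5 * omega_matrix(w) * dt) @ np.asarray(q, dtype=float)
    return q_pred / np.linalg.norm(q_pred)        # keep the quaternion unit-length

# A Kalman filter would use predict_quaternion as its state-transition step and then
# correct the prediction with the quaternion observed by the IMU/gimbal, weighting
# the two with chosen process and measurement covariances.
```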
In the actual process of target positioning, the third step can be executed after the second step in the continuous positioning process, or the third step can be executed after the quaternion is obtained in the second step and before the real coordinate of the ground target is superposed with the pose information of the unmanned aerial vehicle.
The Kalman filtering method is adopted to carry out filtering and denoising on the observation data, and the observation angular speed of the IMU is fully utilized to predict and denoise the pose of the unmanned aerial vehicle. Therefore, the final positioning result is stable, and the positioning result is more accurate.
Example 2:
another embodiment of the ground target positioning method based on monocular vision of the unmanned aerial vehicle is given below.
This embodiment comprises the steps of:
Step a1: acquire the position of the target in each frame of image. After the unmanned aerial vehicle takes off, the camera is started; once the ground target is in view, the target position is initialized by manual framing or a target detection algorithm, and the position of the target in the image is then continuously acquired using a target tracking method. Manual framing means that, after the camera captures the target, the target is framed on the image containing it by manually drawing a rectangular frame, which initializes the position of the target in the image. Using a target detection algorithm means that, after the camera captures the target, a detection algorithm automatically detects the target and frames it with a rectangular frame. The initial target box and the initial frame are then used to initialize the target tracking algorithm, enabling the tracking algorithm to track and frame the target in each subsequent frame.
The specific method comprises the following steps:
a1.1 the unmanned aerial vehicle takes off and starts the camera to shoot the ground, each frame image captured by the camera is transmitted back to the ground station, and the ground station processes the images.
a1.2 when the target appears in a certain frame of image I, using a manual or target detection method to frame the target in the image I, wherein the method comprises the following steps:
the manual method a1.2.1 is to write a framing program using an image processing framework such as OpenCV, and the engineer executes the program to draw a target frame on the image I using a mouse. The target frame contains four values { rx, ry, rw, rh }, where (u, v) represents two coordinate axes of the image coordinate system, rx and ry represent coordinates of the top left vertex of the target frame in the u and v directions of the image I, respectively, and rw and rh represent the length and width of the target frame, respectively.
and a1.2.2 target detection method, namely, calling a target detection algorithm to process the image I, and directly obtaining a target frame. Commonly used target detection algorithms are: YOLO, fast-RCNN, SSD, and the like. The data form of the obtained target frame is the same as that described in a1.2.1.
a1.3 Initialize a target tracking algorithm with the target frame data of the image I, and then track the target in each subsequent frame with the tracking algorithm, framing the target in real time. Common target tracking algorithms include KCF, TLD, SiamFC and the like.
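As an illustration of step a1, a minimal OpenCV-based sketch of manual framing plus KCF tracking; the tracker constructor name follows the OpenCV contrib API (in some versions it lives under cv2.legacy), cv2.selectROI stands in for the manual framing program, and frame acquisition from the drone's video link is simplified to a local capture device.

```python
import cv2

def track_ground_target(video_source=0):
    """Step a1 sketch: manual framing of the first target frame, then KCF tracking."""
    cap = cv2.VideoCapture(video_source)
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("could not read a frame from the video source")

    # a1.2.1: the operator draws the initial target frame {rx, ry, rw, rh} with the mouse.
    box = cv2.selectROI("select target", frame, fromCenter=False)

    # a1.3: initialise the tracking algorithm with the initial target frame.
    tracker = cv2.TrackerKCF_create()
    tracker.init(frame, box)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        found, box = tracker.update(frame)
        if found:
            yield box        # (rx, ry, rw, rh) passed on to the positioning step a2
```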
Step a2: implement a ground target positioning algorithm based on the camera field of view (FOV), and calculate the real coordinates of the ground target during the flight of the unmanned aerial vehicle.
The specific method comprises the following steps:
a2.1 Establish a world coordinate system (x, y, z) by taking the position of the onboard camera as the world coordinate system origin O(0, 0, 0).
a2.2 The onboard camera is controlled by a gimbal and can rotate freely. Calculate the rotation angle of the camera relative to the initial angle at each moment; the specific method is as follows:
a2.2.1 Acquire in real time the rotation angle of the onboard camera relative to the initial angle; this rotation angle can be obtained from the gimbal connecting the onboard camera with the drone. The rotation angles are expressed in the form of Euler angles, namely the roll angle (Roll), pitch angle (Pitch) and yaw angle (Yaw), which are denoted by φ, β and γ, respectively.
a2.2.2 According to the three rotation angles φ, β and γ, the corresponding three rotation matrices R_φ, R_β, R_γ are obtained by the rotation matrix calculation method.
a2.3 Calculate the coordinate offset (Δw, Δh) of the position coordinate T of the target from the image center point C. The coordinate value of T can be calculated from the target frame obtained in a1.2.1 by the formula

u_T = rx + rw/2,  v_T = ry + rh/2

The coordinate value of C is (w/2, h/2). The coordinate offset can therefore be determined by subtracting the coordinate values of T and C:

Δw = u_T − w/2,  Δh = v_T − h/2

where w and h are the width and height of the image, respectively.
a2.4 Knowing the image coordinate system distance (w/2, h/2) from the image center point C to the image boundary, calculate its distance in the real world:

L_w/2 = f_c · tan(F_hor / 2),  L_h/2 = f_c · tan(F_ver / 2)

wherein f_c denotes the focal length of the camera (in meters), and F_hor and F_ver represent the horizontal and vertical field angles of the camera, respectively.
a2.5 Calculate the coordinates of the position point T of the target in the image in the real world coordinate system. The specific method is as follows:

a2.5.1 According to the image coordinate system distance (w/2, h/2) and its distance (L_w/2, L_h/2) in the real world, together with the center offset (Δw, Δh) of the position coordinate T of the target and its offset (L_Δw, L_Δh) in the real world, the proportional relation between the image coordinate system and the real coordinate system is

Δw / (w/2) = L_Δw / L_w/2,  Δh / (h/2) = L_Δh / L_h/2

from which the values of L_Δw and L_Δh can be found.

a2.5.2 The real distance from the image plane to the camera position is the focal length f_c, so the coordinates of the point T in the real world can be solved as T_W = (L_Δw, L_Δh, −f_c).
a2.6 Calculate the coordinates of the target position point T in the image in the real world coordinate system after the camera has rotated, i.e. superimpose the rotation matrices R_φ, R_β, R_γ of the camera on the original real world coordinate T_W of point T; the calculation formula is as follows:

T_W' = R_φβγ · T_W = R_γ R_β R_φ · T_W

a2.7 Apply the flying height of the unmanned aerial vehicle to the calculation of T_W', obtaining the real world coordinates of the target point during the flight of the unmanned aerial vehicle:

T_W'' = (x_T, y_T, z_T) = T_W' + (0, 0, h_u)

wherein h_u is the real flying height of the unmanned aerial vehicle, which can be obtained by a GPS sensor.
a2.8 Define the ground plane as z = 0 and calculate the true coordinate T_R of the ground target. The specific method is as follows:

a2.8.1 Calculate the line (connection) function equation, in the real world, between the camera coordinate point and the imaging point of the target in the image. After the unmanned aerial vehicle takes off, the real world coordinate of the camera is (0, 0, h_u); then, according to the two-point form of a straight line, the connection equation is:

x / x_T = y / y_T = (z − h_u) / (z_T − h_u)

a2.8.2 Substituting z = 0 into the connection equation, the real coordinates of the ground target are obtained:

T_R = ( −h_u · x_T / (z_T − h_u),  −h_u · y_T / (z_T − h_u),  0 )

During this calculation, x_T, y_T and z_T − h_u all contain the same factor f_c, so f_c can be eliminated, realizing positioning without f_c.
Step a3: superimpose the pose information of the unmanned aerial vehicle in the target positioning. The camera is controlled by the gimbal to rotate, and the rotation matrix R_φβγ can be obtained. During the flight of the unmanned aerial vehicle, with the pose constantly changing, the IMU can acquire the pose information and solve the rotation matrix R_U of the unmanned aerial vehicle. Superimposing R_U on the rotation matrix of the camera enables the target to be tracked accurately during the flight of the unmanned aerial vehicle.
The specific method comprises the following steps:
a3.1 Acquire the pose information of the unmanned aerial vehicle through the IMU, and calculate the rotation matrix R_U of the unmanned aerial vehicle.

a3.1.1 Define the quaternion information obtained by the IMU as

q = [ cos(α/2), u_R · sin(α/2) ]

a3.1.2 Let another quaternion p represent a vector before rotation, and let p_R be the quaternion obtained after rotating p; the rotation formula is p_R = q ⊗ p ⊗ q* (where ⊗ represents the quaternion product and q* is the conjugate of q). Further solving gives

v_R = (q_0² − |q_v|²) · v + 2 (q_v · v) · q_v + 2 q_0 (q_v × v)

wherein q_0 is the scalar part of q, q_v is the vector part of q, and v and v_R are respectively the vector parts of p and p_R. From this, the rotation matrix R_U of the unmanned aerial vehicle can be solved.
a3.2 Superimpose the unmanned aerial vehicle rotation matrix R_U on the target positioning. The rotation of the drone and the rotation of the camera gimbal can be regarded as a superimposed relationship: the camera first rotates with the gimbal, and then rotates with the drone on top of the rotation of the gimbal, so the overall rotation matrix of the camera can be defined as R_U_φβγ = R_U R_φβγ. Using R_U_φβγ to replace R_φβγ in step a2 realizes the superposition of the pose matrix of the unmanned aerial vehicle in the positioning process.
Step a4: based on the angular velocity information of the unmanned aerial vehicle observed by the IMU, filter and denoise the pose information of the unmanned aerial vehicle using a Kalman filter.
The specific method comprises the following steps:
a4.1 Define the quaternion q = [cos(α/2), u_R · sin(α/2)]; its derivative with respect to time t is

dq/dt = (1/2) · (dα/dt) · [ −sin(α/2), u_R · cos(α/2) ]

wherein u_R represents a fixed rotation axis, so du_R/dt = 0, and α is the rotation angle.

a4.2 Solve the derivative of the rotation angle α with respect to time t: dα/dt = ω_E, where ω_E is the instantaneous angular velocity of rotation about the u_R axis, whose component velocities on the three component axes of u_R are ω_E = (ω_x, ω_y, ω_z).
a4.3 Solve the update formula of q(t + Δt) with respect to q(t).

a4.3.1 From the formula in a4.1, extract a factor of q; substituting ω_E from a4.2, we obtain:

dq/dt = (1/2) · Ω(t) · q

where Ω(t) is a 4 × 4 matrix composed of the components of ω_E.

a4.3.2 The formula dq/dt = (1/2) · Ω(t) · q in a4.3.1 is a homogeneous linear equation, and the discrete form of its general solution can be obtained:

q(t + Δt) = exp( (1/2) · Ω · Δt ) · q(t)

a4.3.3 Applying Taylor expansion to the above formula and keeping the first two terms gives:

q(t + Δt) ≈ ( I + (1/2) · Ω · Δt ) · q(t)
a4.4 Calculate the instantaneous angular velocity using the sampled IMU angular velocity data, and construct a Kalman filter to denoise the pose information.
a4.4.1 Define f_G as the angular velocity sampling frequency of the IMU, so that Δt = 1/f_G and

ω ≈ Δα / Δt

where Δα/Δt represents the change in angle over the time Δt, which approximates the instantaneous angular velocity when Δt is sufficiently small.
a4.4.2 Construct a Kalman filter according to the update formula of a4.3.3 and all known quantities to complete the denoising of the pose information of the unmanned aerial vehicle. End.
Example 3:
based on the same invention concept, the invention also provides a ground target positioning system based on the monocular vision of the unmanned aerial vehicle, and because the principle of solving the technical problems of the devices is similar to the ground target positioning method based on the monocular vision of the unmanned aerial vehicle, repeated parts are not repeated.
The basic structure of the system is shown in fig. 4, and comprises: the device comprises a position acquisition module, a coordinate calculation module and a positioning module;
the position acquisition module is used for acquiring the position of a ground target in each frame of monocular vision image of the unmanned aerial vehicle;
the coordinate calculation module is used for calculating the real coordinates of the ground target by adopting the field angle based on the position in the image;
and the positioning module is used for superposing the real coordinates of the ground target with the pose information of the unmanned aerial vehicle to obtain the positioning information of the ground target in the flying process of the unmanned aerial vehicle.
The detailed structure of the ground target positioning system based on monocular vision of the unmanned aerial vehicle is shown in fig. 5.
Wherein, the position acquisition module includes: an initial target frame unit and a tracking unit;
the initial target frame unit is used for framing an initial target frame containing the ground target when the ground target appears in an image frame of the monocular visual image of the unmanned aerial vehicle for the first time;
and the tracking unit is used for initializing a target tracking algorithm according to the initial target frame, tracking the target of each frame of image after the ground target appears for the first time by adopting the target tracking algorithm, framing out the ground target in real time and obtaining the position of the ground target in each frame of monocular vision image of the unmanned aerial vehicle.
Wherein, the coordinate calculation module includes: the device comprises a coordinate system unit, an image coordinate offset unit, a position point coordinate unit and a target coordinate unit;
the coordinate system unit is used for establishing a world coordinate system by taking the position of the airborne camera as the origin of the world coordinate system;
the image coordinate offset unit is used for calculating the image coordinate offset between the ground target and the image center point on the image frame;
the position point coordinate unit is used for calculating the coordinates of the position points of the ground target on the image frame on a world coordinate system according to the image coordinate offset and in combination with the field angle;
and the target coordinate unit is used for calculating the real coordinate of the ground target according to the coordinate of the position point on the world coordinate system.
Wherein, the position point coordinate unit includes: a center point distance subunit, a coordinate offset subunit, an initial coordinate subunit and a position point coordinate subunit;
the center point distance subunit is used for calculating the real-world distance from the image center point to the image boundary according to the distance from the image center point to the image boundary in combination with the field angle;
the coordinate offset subunit is used for calculating the real-world coordinate offset of the ground target between the position point on the image frame and the image center point according to the distance from the image center point to the image boundary, the corresponding real-world distance, and the image coordinate offset;
the initial coordinate subunit is used for obtaining the initial coordinates of the position point in the real world, without considering the camera rotation and the flying height of the unmanned aerial vehicle, by using the real-world coordinate offset in combination with the focal length;
and the position point coordinate subunit is used for sequentially superposing the initial coordinates with the rotation matrix of the camera and the flying height of the unmanned aerial vehicle to obtain the coordinates of the position point of the ground target on the image frame in the world coordinate system.
Wherein the target coordinate unit includes: a connection function subunit and a real coordinate subunit;
the connecting line function subunit is used for calculating a connecting line function equation of the camera coordinate point and the position point in the real world according to the coordinates of the position point on the world coordinate system;
and the real coordinate subunit is used for eliminating the focal length by adopting a line function equation with the ground height as 0 and calculating the real coordinate of the ground target.
Wherein, the positioning module includes: the unmanned aerial vehicle rotation matrix unit and the positioning unit;
the unmanned aerial vehicle rotation matrix unit is used for acquiring pose information of the unmanned aerial vehicle through the inertial measurement unit and calculating an unmanned aerial vehicle rotation matrix based on the pose information;
and the positioning unit is used for superposing the rotation matrix of the unmanned aerial vehicle on the rotation matrix of the camera and obtaining the positioning information of the ground target in the flying process of the unmanned aerial vehicle through calculation.
The ground target positioning system based on the monocular vision of the unmanned aerial vehicle further comprises a filtering module;
and the filtering module is used for filtering and denoising the pose information of the unmanned aerial vehicle by adopting a Kalman filtering method.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that the above-mentioned embodiments are only for illustrating the technical solutions of the present application and not for limiting the scope of protection thereof, and although the present application is described in detail with reference to the above-mentioned embodiments, those skilled in the art should understand that after reading the present application, they can make various changes, modifications or equivalents to the specific embodiments of the application, but these changes, modifications or equivalents are all within the scope of protection of the claims to be filed.

Claims (7)

1. A ground target positioning method based on monocular vision of an unmanned aerial vehicle is characterized by comprising the following steps:
acquiring the position of a ground target in each frame of monocular vision image of the unmanned aerial vehicle;
calculating real coordinates of the ground target by adopting an angle of view based on the position in the image;
superposing the real coordinates of the ground target with the pose information of the unmanned aerial vehicle to obtain the positioning information of the ground target in the flying process of the unmanned aerial vehicle;
the calculating the real coordinates of the ground target by adopting the field angle based on the position in the image comprises the following steps:
establishing a world coordinate system by taking the position of the airborne camera as the origin of the world coordinate system;
calculating the image coordinate offset between the ground target and the image center point on the image frame;
calculating the coordinates of the position points of the ground target on the image frame on a world coordinate system according to the image coordinate offset and in combination with the field angle;
calculating the real coordinate of the ground target according to the coordinate of the position point on a world coordinate system;
the method for calculating the coordinates of the position points of the ground target on the image frame on the world coordinate system according to the image coordinate offset and in combination with the field angle comprises the following steps:
calculating the real-world distance from the image center point to the image boundary according to the focal length of the camera in combination with the field angle;
calculating the real-world coordinate offset of the ground target between the position point on the image frame and the image center point according to the distance from the image center point to the image boundary, the real-world distance from the image center point to the image boundary, and the image coordinate offset;
obtaining initial coordinates of the position point in the real world without considering camera rotation and unmanned aerial vehicle flying height by using the coordinate offset of the real world and combining a focal length;
sequentially superposing the initial coordinates on a rotation matrix of a camera and the flight height of the unmanned aerial vehicle to obtain coordinates of position points of the ground target on the image frame on a world coordinate system;
the calculating the real coordinate of the ground target according to the coordinate of the position point on the world coordinate system comprises the following steps:
calculating a line function equation of the camera coordinate point and the position point in the real world according to the coordinate of the position point on the world coordinate system;
and (3) with the ground height as 0, eliminating the focal length by adopting the line function equation, and calculating the real coordinate of the ground target.
2. The method of claim 1, wherein said obtaining a location of a ground target in each frame of the drone monocular visual image comprises:
when the ground target appears in the image frame of the monocular visual image of the unmanned aerial vehicle for the first time, framing an initial target frame containing the ground target;
and initializing a target tracking algorithm according to the initial target frame, performing target tracking on each frame of image after the ground target appears for the first time by adopting the target tracking algorithm, and framing out the ground target in real time to obtain the position of the ground target in each frame of monocular vision image of the unmanned aerial vehicle.
3. The method of claim 1, wherein the real-world distance from the image center point to the image boundary is calculated as follows:
L_w/2 = f_c · tan(F_hor / 2),  L_h/2 = f_c · tan(F_ver / 2)

wherein L_w/2 represents the real-world distance from the image center point to the image boundary in the horizontal direction, L_h/2 represents the real-world distance from the image center point to the image boundary in the vertical direction, f_c denotes the focal length of the camera, F_hor represents the horizontal field angle of the camera, and F_ver represents the vertical field angle of the camera.
4. The method of claim 3, wherein the real-world coordinate offset is calculated as follows:
L_Δw = (2Δw / w) · L_w/2
L_Δh = (2Δh / h) · L_h/2

in the formula, L_Δw represents the real-world coordinate offset in the horizontal direction, Δw represents the image coordinate offset in the horizontal direction, w represents the image width, L_Δh denotes the real-world coordinate offset in the vertical direction, Δh denotes the image coordinate offset in the vertical direction, and h denotes the image height.
5. The method of claim 1, wherein superimposing the real coordinates of the ground target with pose information of the drone to obtain positioning information of the ground target during flight of the drone comprises:
acquiring pose information of the unmanned aerial vehicle through an inertial measurement unit, and calculating an unmanned aerial vehicle rotation matrix based on the pose information;
and superposing the unmanned aerial vehicle rotation matrix to the rotation matrix of the camera, and calculating to obtain the positioning information of the ground target in the flying process of the unmanned aerial vehicle.
6. The method of claim 1, wherein after calculating the real coordinates of the ground target using the field angle based on the position in the image, and before superimposing the real coordinates of the ground target with the pose information of the drone, the method further comprises:
filtering and denoising the pose information of the unmanned aerial vehicle by adopting a Kalman filtering method.
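As a hedged illustration of the Kalman filtering step, the sketch below denoises a single pose channel (for example a yaw sequence) with a scalar random-walk model; the model and the noise parameters q and r are assumptions, since the patent does not specify the filter design.

```python
import numpy as np

def kalman_smooth(measurements, q=1e-4, r=1e-2):
    """Scalar Kalman filter for denoising one pose channel (assumed random-walk model)."""
    x, p = measurements[0], 1.0        # initial state estimate and covariance
    filtered = []
    for z in measurements:
        p = p + q                      # predict: state random walk with variance q
        k = p / (p + r)                # Kalman gain against measurement noise r
        x = x + k * (z - x)            # update with the new measurement
        p = (1 - k) * p
        filtered.append(x)
    return np.array(filtered)
```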
7. A ground target positioning system based on monocular vision of an unmanned aerial vehicle, characterized by comprising: a position acquisition module, a coordinate calculation module and a positioning module;
the position acquisition module is used for acquiring the position of a ground target in each frame of monocular vision image of the unmanned aerial vehicle;
the coordinate calculation module is used for calculating the real coordinates of the ground target by adopting the field angle based on the position in the image;
the positioning module is used for superposing the real coordinates of the ground target with the pose information of the unmanned aerial vehicle to obtain the positioning information of the ground target in the flying process of the unmanned aerial vehicle;
the calculating the real coordinate of the ground target by adopting the field angle based on the position in the image comprises the following steps:
establishing a world coordinate system by taking the position of the airborne camera as the origin of the world coordinate system;
calculating the image coordinate offset between the ground target and the image center point on the image frame;
calculating the coordinates of the position points of the ground target on the image frame on a world coordinate system according to the image coordinate offset and in combination with the field angle;
calculating the real coordinate of the ground target according to the coordinate of the position point on a world coordinate system;
the calculating of the coordinates, in the world coordinate system, of the position point of the ground target in the image frame according to the image coordinate offset in combination with the field angle comprises the following steps:
calculating the real-world distance from the image center point to the image boundary according to the focal length of the camera in combination with the field angle;
calculating the real-world coordinate offset between the position point of the ground target in the image frame and the image center point, according to the distance from the image center point to the image boundary in the image, the real-world distance from the image center point to the image boundary, and the image coordinate offset;
obtaining initial coordinates of the position point in the real world, without considering camera rotation or the flying height of the unmanned aerial vehicle, by combining the real-world coordinate offset with the focal length;
sequentially superposing the rotation matrix of the camera and the flight height of the unmanned aerial vehicle onto the initial coordinates to obtain the coordinates, in the world coordinate system, of the position point of the ground target in the image frame;
the calculating of the real coordinates of the ground target according to the coordinates of the position point in the world coordinate system comprises the following steps:
calculating the equation of the line through the camera coordinate point and the position point in the real world, according to the coordinates of the position point in the world coordinate system;
setting the ground height to 0, eliminating the focal length by means of the line equation, and calculating the real coordinates of the ground target.
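As a rough sketch of the claimed three-module decomposition, the modules could be wired together as shown below; the class and method names are invented for illustration, and each module is assumed to wrap the corresponding steps sketched for claims 1 to 6.

```python
class GroundTargetPositioningSystem:
    """Illustrative composition of the three claimed modules (names assumed)."""

    def __init__(self, position_module, coordinate_module, positioning_module):
        self.position_module = position_module        # tracks the target in each frame
        self.coordinate_module = coordinate_module    # field-angle based coordinates
        self.positioning_module = positioning_module  # fuses UAV pose information

    def process_frame(self, frame, uav_pose):
        # Position of the ground target in the current monocular image.
        pixel_position = self.position_module.locate(frame)
        # Real coordinates of the target computed from the field angle.
        real_coordinates = self.coordinate_module.compute(pixel_position)
        # Positioning information obtained by superposing the UAV pose.
        return self.positioning_module.fuse(real_coordinates, uav_pose)
```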
CN202010807215.XA 2020-08-12 2020-08-12 Ground target positioning method and system based on monocular vision of unmanned aerial vehicle Active CN112116651B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010807215.XA CN112116651B (en) 2020-08-12 2020-08-12 Ground target positioning method and system based on monocular vision of unmanned aerial vehicle

Publications (2)

Publication Number Publication Date
CN112116651A CN112116651A (en) 2020-12-22
CN112116651B true CN112116651B (en) 2023-04-07

Family

ID=73804085

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010807215.XA Active CN112116651B (en) 2020-08-12 2020-08-12 Ground target positioning method and system based on monocular vision of unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN112116651B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113821052A (en) * 2021-09-22 2021-12-21 一飞智控(天津)科技有限公司 Cluster unmanned aerial vehicle cooperative target positioning method and system and cooperative target positioning terminal
CN113804165B (en) * 2021-09-30 2023-12-22 北京欧比邻科技有限公司 Unmanned aerial vehicle simulation GPS signal positioning method and device
CN114326765B (en) * 2021-12-01 2024-02-09 爱笛无人机技术(南京)有限责任公司 Landmark tracking control system and method for unmanned aerial vehicle visual landing
CN114964245B (en) * 2022-02-25 2023-08-11 珠海紫燕无人飞行器有限公司 Unmanned aerial vehicle vision reconnaissance positioning method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014204548A1 (en) * 2013-06-19 2014-12-24 The Boeing Company Systems and methods for tracking location of movable target object
CN104732518A (en) * 2015-01-19 2015-06-24 北京工业大学 PTAM improvement method based on ground characteristics of intelligent robot
CN106570820A (en) * 2016-10-18 2017-04-19 浙江工业大学 Monocular visual 3D feature extraction method based on four-rotor unmanned aerial vehicle (UAV)
CN107247458A (en) * 2017-05-24 2017-10-13 中国电子科技集团公司第二十八研究所 UAV Video image object alignment system, localization method and cloud platform control method
CN107727079A (en) * 2017-11-30 2018-02-23 湖北航天飞行器研究所 The object localization method of camera is regarded under a kind of full strapdown of Small and micro-satellite
CN109598794A (en) * 2018-11-30 2019-04-09 苏州维众数据技术有限公司 The construction method of three-dimension GIS dynamic model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A real-time positioning method for UAV video based on elevation data; Guo Qiaojin et al.; Computer & Digital Engineering; 2018-12-31; full text *

Also Published As

Publication number Publication date
CN112116651A (en) 2020-12-22

Similar Documents

Publication Publication Date Title
CN112116651B (en) Ground target positioning method and system based on monocular vision of unmanned aerial vehicle
CN109084732B (en) Positioning and navigation method, device and processing equipment
CN110070615B (en) Multi-camera cooperation-based panoramic vision SLAM method
CN109887057B (en) Method and device for generating high-precision map
CN109506642B (en) Robot multi-camera visual inertia real-time positioning method and device
CN109752003B (en) Robot vision inertia point-line characteristic positioning method and device
WO2020253260A1 (en) Time synchronization processing method, electronic apparatus, and storage medium
CN108810473B (en) Method and system for realizing GPS mapping camera picture coordinate on mobile platform
CN111156998A (en) Mobile robot positioning method based on RGB-D camera and IMU information fusion
US20180075614A1 (en) Method of Depth Estimation Using a Camera and Inertial Sensor
CN109978954A (en) The method and apparatus of radar and camera combined calibrating based on cabinet
CN111968228B (en) Augmented reality self-positioning method based on aviation assembly
CN114088087B (en) High-reliability high-precision navigation positioning method and system under unmanned aerial vehicle GPS-DENIED
CN111791235A (en) Robot multi-camera visual inertia point-line characteristic positioning method and device
CN112200869A (en) Robot global optimal visual positioning method and device based on point-line characteristics
CN114419109B (en) Aircraft positioning method based on visual and barometric information fusion
CN114638897A (en) Multi-camera system initialization method, system and device based on non-overlapping views
CN112862818B (en) Underground parking lot vehicle positioning method combining inertial sensor and multi-fisheye camera
CN110598370A (en) Robust attitude estimation of multi-rotor unmanned aerial vehicle based on SIP and EKF fusion
CN116182855B (en) Combined navigation method of compound eye-simulated polarized vision unmanned aerial vehicle under weak light and strong environment
CN116952229A (en) Unmanned aerial vehicle positioning method, device, system and storage medium
CN110108894B (en) Multi-rotor speed measuring method based on phase correlation and optical flow method
CN117115271A (en) Binocular camera external parameter self-calibration method and system in unmanned aerial vehicle flight process
Wang et al. Pose and velocity estimation algorithm for UAV in visual landing
CN116721166A (en) Binocular camera and IMU rotation external parameter online calibration method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant