CN118052886A - GNSS and camera external parameter calibration method for correcting AR display error of unmanned aerial vehicle - Google Patents

GNSS and camera external parameter calibration method for correcting AR display error of unmanned aerial vehicle

Info

Publication number
CN118052886A
CN118052886A (application CN202410242624.8A)
Authority
CN
China
Prior art keywords
camera
gnss
residual
point
constraint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410242624.8A
Other languages
Chinese (zh)
Inventor
石立阳
杨建�
祝昌宝
王哲
邱伟洋
陈洪杰
黄星淮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Digital Technology Guangzhou Co ltd
Original Assignee
Digital Technology Guangzhou Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Digital Technology Guangzhou Co ltd
Priority to CN202410242624.8A
Publication of CN118052886A
Status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 - Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/01 - Satellite radio beacon positioning systems transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/13 - Receivers
    • G01S19/23 - Testing, monitoring, correcting or calibrating of receiver elements
    • G01S19/235 - Calibration of receiver components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 - Manipulating 3D models or images for computer graphics
    • G06T19/006 - Mixed reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30241 - Trajectory
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30244 - Camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a GNSS and camera external parameter calibration method for correcting AR display errors of an unmanned aerial vehicle. The method comprises: flying the unmanned aerial vehicle, which carries a camera, in an open area and recording video during the flight, with the start and end points of the flight path at the same position; acquiring GNSS data while the video is recorded; feeding the recorded video and GNSS data into a visual-GNSS-SLAM system to obtain a camera trajectory with metric scale; inputting the scaled camera trajectory and the raw GNSS data into an external parameter calibration SLAM system to obtain the camera-GNSS external parameters; and, when projecting labels with the GNSS, multiplying by the camera-GNSS external parameters to obtain the camera's coordinates in the world frame, which are then used for the transformation and display. The invention reduces system computing resource consumption: obtaining coordinates via the calibrated external parameters avoids embedding a visual SLAM system in the terminal's embedded system, GNSS acquisition requires no extra system resources, and the corresponding camera coordinates are obtained through the external parameter transformation.

Description

GNSS and camera external parameter calibration method for correcting AR display error of unmanned aerial vehicle
Technical Field
The invention relates to the technical field of unmanned aerial vehicles and image processing, in particular to a GNSS and camera external parameter calibration method for correcting an AR display error of an unmanned aerial vehicle.
Background
In unmanned aerial vehicle AR (Augmented Reality), a tag model from a live-action three-dimensional GIS (Geographic Information System) must be displayed in the video of the camera carried by the unmanned aerial vehicle. For correct display projection, the coordinate position of the unmanned aerial vehicle in a geodetic frame (such as WGS 84) must be known, along with the coordinates of the tag model. Since the tag model coordinates and the unmanned aerial vehicle are both represented in the geodetic frame, a coordinate transformation from the tag model to the unmanned aerial vehicle can be computed. The geodetic coordinates of the unmanned aerial vehicle are acquired through the onboard GNSS (Global Navigation Satellite System), and a rotation and translation transformation exists between the GNSS and the camera; this transformation is called the external parameters.
AR display relies on the transformation from the tag model to the camera for projection. With the original parameters, only the GNSS coordinates are available; if the GNSS coordinates are naively used as the camera coordinates for projection, a significant fixed pixel deviation appears in the image plane. The source of this deviation is the unmodeled transformation from the GNSS to the camera.
To solve this problem, the invention provides a GNSS-camera external parameter calibration method that corrects the fixed pixel deviation in the unmanned aerial vehicle's AR display. A GNSS module performs real-time positioning of the unmanned aerial vehicle, and the AR system is built in this manner, reducing the demand on the computing module. By combining GNSS information, the unmanned aerial vehicle is unified onto a geodetic coordinate system, so the coordinate transformation for AR projection is more universal and unaffected by geographic location.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems existing in the prior art. Therefore, the invention discloses a GNSS and camera external parameter calibration method for correcting an AR display error of an unmanned aerial vehicle, which comprises the following steps:
Step 1, fly the unmanned aerial vehicle, which carries a camera, in an open area and record video during the flight, with the start and end points of the flight path at the same position;
Step 2, acquire GNSS data while the video is recorded;
Step 3, feed the recorded video and GNSS data into a visual-GNSS-SLAM system to obtain a camera trajectory with metric scale;
Step 4, input the scaled camera trajectory and the raw GNSS data into an external parameter calibration SLAM system to obtain the camera-GNSS external parameters;
Step 5, when projecting labels with the GNSS, multiply by the camera-GNSS external parameters to obtain the camera's coordinates in the world frame, and use these coordinates for the transformation and display.
Still further, the residuals of the visual-GNSS-SLAM system are a visual re-projection residual constraint, a GNSS relative translation residual constraint, and a GNSS absolute translation residual constraint.
Still further, the visual re-projection residual constraint is specifically as follows: n three-dimensional space points P project to pixel points u; the pose of the world frame relative to the camera is R, t, represented as the Lie group element

$$T=\begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix}$$

A spatial point is $P=[x,y,z]^{T}$ and the pixel point coordinates are $u=[u,v]^{T}$; the relationship between the pixel position and the spatial point position is $s\,u = K T P$ (with u and P in homogeneous coordinates), where s is a scale factor (the point's depth) and $K\in\mathbb{R}^{3\times3}$ is the camera intrinsic matrix, which must be obtained through camera calibration in advance;
the residual is constructed as

$$e=\sum_{i=1}^{n}\left\| u_{i}-\frac{1}{s_{i}} K T P_{i} \right\|_{2}^{2}$$

where the index i denotes the feature pixel point $u_i$ and scale factor $s_i$ corresponding to the i-th map point $P_i$, and e is the overall error. The n map points yield n residual terms, and BA (Bundle Adjustment) optimization finds the transformation T that minimizes the overall error, where T is the world-to-camera transformation.
Still further, the GNSS relative translation residual constraint: since the camera and the GNSS are mounted on the same rigid body, their relative translations are necessarily the same, so the residual is constructed as

$$\Delta t = t_{g_{2}g_{1}} - t_{c_{2}c_{1}}, \qquad t_{g_{2}g_{1}} = t_{g_{2}} - t_{g_{1}}, \qquad t_{c_{2}c_{1}} = t_{c_{2}} - t_{c_{1}}$$

where $\Delta t$ is the error between the two displacements $t_{g_{2}g_{1}}$ and $t_{c_{2}c_{1}}$; $t_{g_{1}}, t_{c_{1}}$ denote the GNSS and camera coordinate points in the first frame, $t_{g_{2}}, t_{c_{2}}$ those in the second frame; $t_{g_{2}g_{1}}$ is the displacement of the GNSS between the two frames and $t_{c_{2}c_{1}}$ the displacement of the camera between the two frames. Here $t_{c_{2}c_{1}}$ from the visual trajectory is the optimization variable, so that the visual trajectory acquires its true scale.
Still further, the GNSS absolute translation residual constraint includes:
adding an absolute translation error yields the coordinate transformation of the camera trajectory in the world frame; the starting pose of the camera trajectory from VO (Visual Odometry) is generally the identity matrix, and this constraint aligns the VO starting point to the world frame. The specific residual is

$$\Delta t = t_{gw} - t_{cw}$$

where $\Delta t$ is the error between the two translations $t_{gw}$ and $t_{cw}$; $t_{gw}$ is the translation of the GNSS in the world frame and $t_{cw}$ the translation of the camera in the world frame. The optimization variable here is $t_{cw}$; the starting point of the visual trajectory is aligned to the world frame, which also reduces the null-space drift caused by the relative translation optimization.
Furthermore, the unmanned aerial vehicle external parameter calibration SLAM system is used to calibrate the external parameters between the GNSS and the camera, and its residual constraints comprise a visual re-projection residual constraint, a world point residual constraint, a camera relative translation residual constraint, and a camera absolute translation residual constraint.
Still further, the visual re-projection residual constraints are as follows:
The optimization variable of this residual constraint is $T_{cg}$, the GNSS-to-camera external parameter.
The visual re-projection formula is

$$[\Delta u, \Delta v]^{T} = u_{c_{2}} - \pi\left(T\,P_{c_{1}}\right) \qquad (1)$$

where

$$T = T_{cg} T_{g_{2}g_{1}} T_{gc}, \qquad P_{c_{1}} = T_{cg} T_{g_{1}w} P_{w} \qquad (2)$$

$\Delta u$ is the pixel error in the u direction and $\Delta v$ the pixel error in the v direction, both in pixels; subscript c denotes the camera, g the GNSS, w the world frame, 1 the previous frame, and 2 the next frame;
substituting (2) into (1) gives the re-projection residual constraint

$$[\Delta u, \Delta v]^{T} = u_{c_{2}} - \pi\left(T_{cg} T_{g_{2}g_{1}} T_{gc}\,P_{c_{1}}\right)$$

That is, the world point $P_{w}$ is projected into the camera coordinate system of the j-th frame, the corresponding pixel coordinates are obtained through the intrinsics, and a re-projection residual is constructed against the detected pixel point. Here $P_{w}$ is the 3D point obtained from the corresponding point in the c1 pixel coordinate system; the 3D point from the c2 image is not used for residual construction. The 3D point is obtained through a GIS engine;
expanding the re-projection residual constraint gives

$$[\Delta u, \Delta v]^{T} = u_{c_{2}} - \pi\left(T_{cg} T_{g_{2}g_{1}} T_{g_{1}w} P_{w}\right)$$

The re-projection is performed in the camera's normalized coordinate system rather than on the pixel plane, so multiplication by the intrinsic matrix K is unnecessary and the projection function becomes

$$\pi(P_{c}) = \left[\frac{X}{Z}, \frac{Y}{Z}\right]^{T}, \qquad P_{c} = [X, Y, Z]^{T}$$
Still further, the world point residual construction constraint includes: from the pixel coordinates of the image, the 3D point $P_{w}$ corresponding to the pixel point is obtained through a GIS engine, and the constructed residual constraint equation is:

$$\Delta P = P_{w_{2}} - P_{w_{1}}, \qquad \Delta P \in \mathbb{R}^{3\times1}$$

Only the world point $P_{w_{1}}$ acquired from the c1 image is optimized; the equation means that the 3D points corresponding to the same feature point in the two images are theoretically the same point.
Still further, the camera relative translation residual constraint is as follows:
since the GNSS data exhibits non-negligible jitter while the camera pose is smooth, the relative pose of the camera is used to constrain the relative pose of the GNSS and thereby reduce its jitter. The constructed residual constraint equation is

$$\Delta t = t_{c_{2}c_{1}} - t_{g_{2}g_{1}}$$

where $t_{g_{2}g_{1}}$ is specifically calculated as

$$T_{g_{2}g_{1}} = T_{g_{2}w} T_{wg_{1}}$$
$$t_{g_{2}g_{1}} = R_{g_{2}w}\,t_{wg_{1}} + t_{g_{2}w}$$

Since the ENU coordinate system was aligned when the visual data was produced, the rotation R is essentially the identity matrix I, so the expression can be simplified to

$$t_{g_{2}g_{1}} = t_{wg_{1}} + t_{g_{2}w}$$

The optimization variable of this residual equation is $t_{g_{2}g_{1}}$; the equation means that the camera translation and the GNSS translation between two adjacent frames are theoretically equal.
Still further, the camera absolute translation residual constraint is as follows:
the GNSS data is converted into world coordinates with the first frame as the origin, so that it is numerically aligned with the camera's world coordinates; an absolute translation constraint is added to prevent the null-space drift caused by the relative displacement constraint. The residual equation is constructed as (with the same notation as above):

$$\Delta t = t_{c_{1}w} - t_{g_{1}w}$$

The optimization variable of this residual equation is $t_{g_{1}w}$; the equation means that the translation of the camera's current frame relative to the world frame is theoretically equal to the translation of the current GNSS relative to the world frame.
Compared with the prior art, the invention provides a GNSS-camera external parameter calibration method that corrects the fixed pixel deviation of the unmanned aerial vehicle's AR display. A GNSS module performs real-time positioning of the unmanned aerial vehicle, and the AR system is built in this manner, reducing the demand on the computing module. Combined with GNSS information, the unmanned aerial vehicle is unified onto a geodetic coordinate system, so the coordinate transformation for AR projection is more universal, unaffected by geographic location, and the fixed error of the unmanned aerial vehicle's AR label display is reduced. The original label display used the raw GNSS value as the camera's world coordinates, ignoring the external parameters introduced when the GNSS and the camera are assembled; with this calibration method, the correct camera coordinates are obtained by transforming the GNSS coordinates through the external parameters. System computing resource consumption is reduced: obtaining coordinates via the calibrated external parameters avoids adding a visual SLAM system to the terminal's embedded system, GNSS acquisition requires no system resources, and the corresponding camera coordinates are obtained through the external parameter transformation.
Drawings
The invention will be further understood from the following description taken in conjunction with the accompanying drawings. The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the embodiments. In the figures, like reference numerals designate corresponding parts throughout the different views.
FIG. 1 is a calibration flow chart of a method of calibrating GNSS and camera parameters according to the present invention;
FIG. 2 is a diagram illustrating a visual-GNSS-SLAM system factor design according to the present invention;
FIG. 3 is a factor design diagram of the unmanned aerial vehicle external parameter calibration SLAM system.
Detailed Description
Example 1
As shown in figs. 1-3, the present embodiment provides a method for calibrating GNSS and camera external parameters, comprising the steps of: a. fly the unmanned aerial vehicle, which carries a camera, in an open area and record video during the flight, with the start and end points of the flight path preferably near the same position; b. acquire GNSS data while the video is recorded; c. feed the recorded video and GNSS data into the visual-GNSS-SLAM system to obtain a camera trajectory with metric scale; d. input the scaled camera trajectory and the raw GNSS data into the external parameter calibration SLAM (Simultaneous Localization and Mapping) system to obtain the camera-GNSS external parameters; e. when projecting labels with the GNSS, multiply by the external parameters to obtain the camera's coordinates in the world frame, and use these coordinates for the transformation and display.
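The following is a minimal sketch of how step e could be applied, not the patented implementation: the 4x4 homogeneous-matrix layout, the lever-arm value, and all function and variable names are assumptions introduced here for illustration.

```python
import numpy as np

def world_to_camera(T_cg: np.ndarray, T_gw: np.ndarray) -> np.ndarray:
    """Chain the world-to-GNSS transform T_gw with the calibrated
    GNSS-to-camera external parameter T_cg (4x4 homogeneous matrices,
    subscripts read right to left): T_cw = T_cg * T_gw."""
    return T_cg @ T_gw

# Illustrative values only: identity rotations and an assumed 0.1 m lever arm.
T_cg = np.eye(4); T_cg[:3, 3] = [0.1, 0.0, 0.0]          # calibrated extrinsic (assumed)
T_gw = np.eye(4); T_gw[:3, 3] = [-100.0, -200.0, -50.0]  # world-to-GNSS from the GNSS fix
T_cw = world_to_camera(T_cg, T_gw)

camera_pos_world = -T_cw[:3, :3].T @ T_cw[:3, 3]  # camera coordinates in the world frame
label_w = np.array([105.0, 200.0, 50.0, 1.0])     # tag model point, homogeneous world coords
label_c = T_cw @ label_w                          # tag point in the camera frame, ready to project
```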
The calibration method combines two SLAM systems with different functions: a visual-GNSS-SLAM system and an unmanned aerial vehicle external parameter calibration SLAM system.
In the following formulas, quantities carry two subscripts, which are read from right to left: for example, $t_{g_{2}g_{1}}$ denotes the translation from $g_{1}$ to $g_{2}$. Subscript 1 denotes the previous frame, 2 the next frame, c the camera frame, g the GNSS frame, and w the world frame.
1. The visual-GNSS-SLAM system residual constraint is constructed as follows:
The goal of this system is to acquire a camera trajectory with metric scale, which will serve as an observation in the calibration system. Its residuals are a visual re-projection residual constraint, a GNSS relative translation residual constraint, and a GNSS absolute translation residual constraint.
A. The visual re-projection residual constraints are specifically as follows:
n three-dimensional space points P project to pixel points u; the pose of the world frame relative to the camera is R, t, represented as the Lie group element

$$T=\begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix}$$

A spatial point is $P=[x,y,z]^{T}$ and the pixel point coordinates are $u=[u,v]^{T}$; the relationship between the pixel position and the spatial point position is $s\,u = K T P$ (with u and P in homogeneous coordinates), where s is a scale factor (the point's depth) and $K\in\mathbb{R}^{3\times3}$ is the camera intrinsic matrix, which must be obtained through camera calibration in advance.
The residual is constructed as

$$e=\sum_{i=1}^{n}\left\| u_{i}-\frac{1}{s_{i}} K T P_{i} \right\|_{2}^{2}$$

The index i denotes the feature pixel point $u_i$ and scale factor $s_i$ corresponding to the i-th map point $P_i$, and e is the overall error. The n map points yield n residual terms, and BA (Bundle Adjustment) optimization finds the transformation T that minimizes the overall error, where T is the world-to-camera transformation.
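As a numerical illustration of this residual, here is a sketch under the assumption that the scale factor $s_i$ is the point's depth in the camera frame; the function and variable names are introduced here, not taken from the patent.

```python
import numpy as np

def total_reprojection_error(K, T, points_w, pixels):
    """e = sum_i || u_i - (1/s_i) * K * (T * P_i) ||^2, where s_i is the depth
    of map point i in the camera frame. BA would minimize this over T."""
    R, t = T[:3, :3], T[:3, 3]
    e = 0.0
    for P, u in zip(points_w, pixels):
        P_c = R @ P + t           # world point into the camera frame
        uvs = K @ P_c             # s * [u, v, 1]^T
        u_hat = uvs[:2] / uvs[2]  # divide by the scale factor s (the depth)
        e += np.sum((u - u_hat) ** 2)
    return e
```

In practice T would be parameterized on SE(3) and the sum minimized with a nonlinear least-squares solver.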
B. GNSS relative translation residual constraint:
Since the camera and the GNSS are mounted on the same rigid body, their relative translations are necessarily the same, so the residual is constructed as

$$\Delta t = t_{g_{2}g_{1}} - t_{c_{2}c_{1}}, \qquad t_{g_{2}g_{1}} = t_{g_{2}} - t_{g_{1}}, \qquad t_{c_{2}c_{1}} = t_{c_{2}} - t_{c_{1}}$$

where $\Delta t$ is the error between the two displacements $t_{g_{2}g_{1}}$ and $t_{c_{2}c_{1}}$; $t_{g_{1}}, t_{c_{1}}$ denote the GNSS and camera coordinate points in the first frame, $t_{g_{2}}, t_{c_{2}}$ those in the second frame; $t_{g_{2}g_{1}}$ is the displacement of the GNSS between the two frames and $t_{c_{2}c_{1}}$ the displacement of the camera between the two frames.
The optimization variable here is $t_{c_{2}c_{1}}$; the purpose is to obtain the true scale of the visual trajectory.
C. GNSS absolute translation residual constraint:
Adding an absolute translation error yields the coordinate transformation of the camera trajectory in the world frame; the starting pose of the camera trajectory from VO (Visual Odometry) is generally the identity matrix, and this constraint aligns the VO starting point to the world frame. The specific residual is

$$\Delta t = t_{gw} - t_{cw}$$

where $\Delta t$ is the error between the two translations $t_{gw}$ and $t_{cw}$; $t_{gw}$ is the translation of the GNSS in the world frame and $t_{cw}$ the translation of the camera in the world frame. The optimization variable here is $t_{cw}$; the purpose is to align the starting point of the visual trajectory to the world frame while reducing the null-space drift caused by the relative translation optimization.
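The two GNSS translation residuals above can be sketched directly from their definitions; this is an illustrative rendering, and the function names are assumptions.

```python
import numpy as np

def gnss_relative_translation_residual(t_g1, t_g2, t_c1, t_c2):
    """Delta_t = t_g2g1 - t_c2c1: the camera and GNSS share one rigid body,
    so their frame-to-frame displacements must agree. Optimizing the camera
    translations against this residual recovers the true metric scale."""
    return (t_g2 - t_g1) - (t_c2 - t_c1)

def gnss_absolute_translation_residual(t_gw, t_cw):
    """Delta_t = t_gw - t_cw: pins the VO starting point (identity pose) to
    the world frame and damps the null-space drift of the relative constraint."""
    return t_gw - t_cw
```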
3. The residual constraint construction of the unmanned aerial vehicle external parameter calibration SLAM system is as follows:
This system calibrates the external parameters between the GNSS and the camera; the camera trajectory from the first SLAM system serves as its observation. Its residuals are a visual re-projection residual constraint, a world point residual constraint, a camera relative translation residual constraint, and a camera absolute translation residual constraint.
A. The visual re-projection residual constraints are as follows:
The optimization variable of this residual constraint is $T_{cg}$, the GNSS-to-camera external parameter.
The visual re-projection formula is

$$[\Delta u, \Delta v]^{T} = u_{c_{2}} - \pi\left(T\,P_{c_{1}}\right) \qquad (1)$$

where

$$T = T_{cg} T_{g_{2}g_{1}} T_{gc}, \qquad P_{c_{1}} = T_{cg} T_{g_{1}w} P_{w} \qquad (2)$$

$\Delta u$ is the pixel error in the u direction and $\Delta v$ the pixel error in the v direction, both in pixels; subscript c denotes the camera, g the GNSS, w the world frame, 1 the previous frame, and 2 the next frame; the following formulas use the same notation.
Substituting (2) into (1) gives the re-projection residual constraint

$$[\Delta u, \Delta v]^{T} = u_{c_{2}} - \pi\left(T_{cg} T_{g_{2}g_{1}} T_{gc}\,P_{c_{1}}\right)$$

In this formula, the world point $P_{w}$ is projected into the camera coordinate system of the j-th frame, the corresponding pixel coordinates are obtained through the intrinsics, and a re-projection residual is then constructed against the detected pixel point. Here $P_{w}$ is the 3D point obtained from the corresponding point in the c1 pixel coordinate system; the 3D point from the c2 image is not used for residual construction. The 3D point is obtained through a GIS engine.
Expanding the re-projection residual constraint gives

$$[\Delta u, \Delta v]^{T} = u_{c_{2}} - \pi\left(T_{cg} T_{g_{2}g_{1}} T_{g_{1}w} P_{w}\right)$$

The re-projection is performed in the camera's normalized coordinate system rather than on the pixel plane, so multiplication by the intrinsic matrix K is unnecessary and the projection function becomes

$$\pi(P_{c}) = \left[\frac{X}{Z}, \frac{Y}{Z}\right]^{T}, \qquad P_{c} = [X, Y, Z]^{T}$$
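Under the conventions above, this residual can be sketched as follows; it is a hedged illustration, with the homogeneous-matrix layout and names assumed here, and with $T_{gc}=T_{cg}^{-1}$.

```python
import numpy as np

def extrinsic_reprojection_residual(T_cg, T_g2g1, T_g1w, P_w, u_c2_norm):
    """Transfer the GIS 3D point P_w (matched in image c1) into camera frame
    c2 via T = T_cg * T_g2g1 * T_gc, then compare on the normalized image
    plane, so no multiplication by K is needed. The optimization variable of
    the constraint is the GNSS-to-camera external parameter T_cg."""
    T_gc = np.linalg.inv(T_cg)            # camera-to-GNSS transform
    P_w_h = np.append(P_w, 1.0)           # homogeneous world point
    P_c1 = T_cg @ T_g1w @ P_w_h           # P_c1 = T_cg * T_g1w * P_w
    P_c2 = T_cg @ T_g2g1 @ T_gc @ P_c1    # into the camera frame of the next frame
    u_hat = P_c2[:2] / P_c2[2]            # normalized coordinates [X/Z, Y/Z]
    return u_c2_norm - u_hat              # [delta_u, delta_v]
```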
B. world point residual construction constraints are as follows:
From the pixel coordinates of the images, the 3D point $P_{w}$ corresponding to the pixel point is obtained through a GIS engine; however, because the 3D points obtained from the GIS engine carry an accuracy error, the 3D coordinates of the same pixel point in the two images are inconsistent. The constructed residual constraint equation is (a combined sketch of constraints B-D follows constraint D):

$$\Delta P = P_{w_{2}} - P_{w_{1}}, \qquad \Delta P \in \mathbb{R}^{3\times1}$$

In this constraint only the world point $P_{w_{1}}$ acquired from the c1 image is optimized; the formula means that the 3D points corresponding to the same feature point in the two images are theoretically the same point.
C. the camera relative translation residual constraint is as follows:
Since the GNSS data exhibits non-negligible jitter while the camera pose is smooth, the relative pose of the camera is used to constrain the relative pose of the GNSS and thereby reduce its jitter. The constructed residual constraint equation is

$$\Delta t = t_{c_{2}c_{1}} - t_{g_{2}g_{1}}$$

where $t_{g_{2}g_{1}}$ is specifically calculated as

$$T_{g_{2}g_{1}} = T_{g_{2}w} T_{wg_{1}}$$
$$t_{g_{2}g_{1}} = R_{g_{2}w}\,t_{wg_{1}} + t_{g_{2}w}$$

Since the ENU coordinate system was aligned when the visual data was produced, the rotation R is essentially the identity matrix I, so the expression can be simplified to

$$t_{g_{2}g_{1}} = t_{wg_{1}} + t_{g_{2}w}$$

The optimization variable of this residual equation is $t_{g_{2}g_{1}}$; the equation means that the camera translation and the GNSS translation between two adjacent frames are theoretically equal.
D. the camera absolute translation residual constraint is as follows:
The GNSS data is converted into world coordinates with the first frame as the origin, so that it is numerically aligned with the camera's world coordinates; an absolute translation constraint is added to prevent the null-space drift caused by the relative displacement constraint. The residual equation is constructed as (with the same notation as above):

$$\Delta t = t_{c_{1}w} - t_{g_{1}w}$$

The optimization variable of this residual equation is $t_{g_{1}w}$; the equation means that the translation of the camera's current frame relative to the world frame is theoretically equal to the translation of the current GNSS relative to the world frame.
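For completeness, the three remaining residuals of the calibration system (B, world point; C, camera relative translation; D, camera absolute translation) can be sketched together; the names are illustrative assumptions, and the ENU-aligned simplification of constraint C is applied.

```python
import numpy as np

def world_point_residual(P_w1, P_w2):
    """B: Delta_P = P_w2 - P_w1. The GIS 3D points of one feature seen in two
    images should coincide; only P_w1 (from the c1 image) is optimized."""
    return P_w2 - P_w1

def camera_relative_translation_residual(t_c2c1, t_wg1, t_g2w):
    """C: Delta_t = t_c2c1 - t_g2g1, with t_g2g1 = t_wg1 + t_g2w once the
    rotation R_g2w is taken as the identity (ENU alignment). The smooth camera
    motion constrains the jittery GNSS; the optimized variable is t_g2g1."""
    return t_c2c1 - (t_wg1 + t_g2w)

def camera_absolute_translation_residual(t_c1w, t_g1w):
    """D: Delta_t = t_c1w - t_g1w. With the GNSS expressed in first-frame world
    coordinates, per-frame camera and GNSS translations should match, which
    prevents null-space drift; the optimized variable is t_g1w."""
    return t_c1w - t_g1w
```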
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
While the invention has been described above with reference to various embodiments, it should be understood that many changes and modifications can be made without departing from the scope of the invention. It is therefore intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to define the spirit and scope of this invention. The above examples should be understood as illustrative only and not limiting the scope of the invention. Various changes and modifications to the present invention may be made by one skilled in the art after reading the teachings herein, and such equivalent changes and modifications are intended to fall within the scope of the invention as defined in the appended claims.

Claims (10)

1. A GNSS and camera external parameter calibration method for correcting an AR display error of an unmanned aerial vehicle is characterized in that,
The method comprises the following steps:
Step 1, fly the unmanned aerial vehicle, which carries a camera, in an open area and record video during the flight, with the start and end points of the flight path at the same position;
Step 2, acquire GNSS data while the video is recorded;
Step 3, feed the recorded video and GNSS data into a visual-GNSS-SLAM system to obtain a camera trajectory with metric scale;
Step 4, input the scaled camera trajectory and the raw GNSS data into an external parameter calibration SLAM system to obtain the camera-GNSS external parameters;
Step 5, when projecting labels with the GNSS, multiply by the camera-GNSS external parameters to obtain the camera's coordinates in the world frame, and use these coordinates for the transformation and display.
2. The method for calibrating GNSS and camera external parameters for correcting unmanned aerial vehicle AR display errors according to claim 1, wherein the residuals of the visual-GNSS-SLAM system are a visual re-projection residual constraint, a GNSS relative translation residual constraint, and a GNSS absolute translation residual constraint.
3. The GNSS and camera external parameter calibration method for correcting unmanned aerial vehicle AR display errors according to claim 2, wherein the visual re-projection residual constraint is specifically as follows: n three-dimensional space points P project to pixel points u; the pose of the world frame relative to the camera is R, t, represented as the Lie group element

$$T=\begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix}$$

A spatial point is $P=[x,y,z]^{T}$ and the pixel point coordinates are $u=[u,v]^{T}$; the relationship between the pixel position and the spatial point position is $s\,u = K T P$ (with u and P in homogeneous coordinates), where s is a scale factor (the point's depth) and $K\in\mathbb{R}^{3\times3}$ is the camera intrinsic matrix, which must be obtained through camera calibration in advance;
the residual is constructed as

$$e=\sum_{i=1}^{n}\left\| u_{i}-\frac{1}{s_{i}} K T P_{i} \right\|_{2}^{2}$$

where the index i denotes the feature pixel point $u_i$ and scale factor $s_i$ corresponding to the i-th map point $P_i$, and e is the overall error; the n map points yield n residual terms, and BA (Bundle Adjustment) optimization finds the transformation T that minimizes the overall error, where T is the world-to-camera transformation.
4. The method of calibrating GNSS and camera external parameters for correcting unmanned aerial vehicle AR display errors according to claim 2, wherein the GNSS relative translation residual constraint further comprises: since the camera and the GNSS are mounted on the same rigid body, their relative translations are necessarily the same, and the residual is constructed as

$$\Delta t = t_{g_{2}g_{1}} - t_{c_{2}c_{1}}, \qquad t_{g_{2}g_{1}} = t_{g_{2}} - t_{g_{1}}, \qquad t_{c_{2}c_{1}} = t_{c_{2}} - t_{c_{1}}$$

where $\Delta t$ is the error between the two displacements $t_{g_{2}g_{1}}$ and $t_{c_{2}c_{1}}$; $t_{g_{1}}, t_{c_{1}}$ denote the GNSS and camera coordinate points in the first frame, $t_{g_{2}}, t_{c_{2}}$ those in the second frame; $t_{g_{2}g_{1}}$ is the displacement of the GNSS between the two frames and $t_{c_{2}c_{1}}$ the displacement of the camera between the two frames; $t_{c_{2}c_{1}}$ from the visual trajectory is the optimization variable, so that the visual trajectory acquires its true scale.
5. The method for calibrating GNSS and camera external parameters for correcting unmanned aerial vehicle AR display errors according to claim 2, wherein the GNSS absolute translation residual constraint includes:
adding an absolute translation error yields the coordinate transformation of the camera trajectory in the world frame; the starting pose of the camera trajectory from VO (Visual Odometry) is generally the identity matrix, and this constraint aligns the VO starting point to the world frame. The specific residual is

$$\Delta t = t_{gw} - t_{cw}$$

where $\Delta t$ is the error between the two translations $t_{gw}$ and $t_{cw}$; $t_{gw}$ is the translation of the GNSS in the world frame and $t_{cw}$ the translation of the camera in the world frame; the optimization variable here is $t_{cw}$; the starting point of the visual trajectory is aligned to the world frame, which also reduces the null-space drift caused by the relative translation optimization.
6. The method for calibrating the GNSS and camera external parameters for correcting the unmanned aerial vehicle AR display error according to claim 1, wherein the unmanned aerial vehicle external parameter calibration SLAM system calibrates the external parameters between the GNSS and the camera, and its residual constraints comprise a visual re-projection residual constraint, a world point residual constraint, a camera relative translation residual constraint, and a camera absolute translation residual constraint.
7. The GNSS and camera extrinsic calibration method for correcting unmanned aerial vehicle AR display errors according to claim 6, wherein the visual re-projection residual constraints are as follows:
The optimization variable of this residual constraint is $T_{cg}$, the GNSS-to-camera external parameter.
The visual re-projection formula is

$$[\Delta u, \Delta v]^{T} = u_{c_{2}} - \pi\left(T\,P_{c_{1}}\right) \qquad (1)$$

where

$$T = T_{cg} T_{g_{2}g_{1}} T_{gc}, \qquad P_{c_{1}} = T_{cg} T_{g_{1}w} P_{w} \qquad (2)$$

$\Delta u$ is the pixel error in the u direction and $\Delta v$ the pixel error in the v direction, both in pixels; subscript c denotes the camera, g the GNSS, w the world frame, 1 the previous frame, and 2 the next frame;
substituting (2) into (1) gives the re-projection residual constraint

$$[\Delta u, \Delta v]^{T} = u_{c_{2}} - \pi\left(T_{cg} T_{g_{2}g_{1}} T_{gc}\,P_{c_{1}}\right)$$

That is, the world point $P_{w}$ is projected into the camera coordinate system of the j-th frame, the corresponding pixel coordinates are obtained through the intrinsics, and a re-projection residual is constructed against the detected pixel point. Here $P_{w}$ is the 3D point obtained from the corresponding point in the c1 pixel coordinate system; the 3D point from the c2 image is not used for residual construction. The 3D point is obtained through a GIS engine;
expanding the re-projection residual constraint gives

$$[\Delta u, \Delta v]^{T} = u_{c_{2}} - \pi\left(T_{cg} T_{g_{2}g_{1}} T_{g_{1}w} P_{w}\right)$$

The re-projection is performed in the camera's normalized coordinate system rather than on the pixel plane, so multiplication by the intrinsic matrix K is unnecessary and the projection function becomes

$$\pi(P_{c}) = \left[\frac{X}{Z}, \frac{Y}{Z}\right]^{T}, \qquad P_{c} = [X, Y, Z]^{T}$$
8. The GNSS and camera external parameter calibration method for correcting unmanned aerial vehicle AR display errors according to claim 6, wherein the world point residual construction constraint includes: from the pixel coordinates of the image, the 3D point $P_{w}$ corresponding to the pixel point is obtained through a GIS engine, and the constructed residual constraint equation is:

$$\Delta P = P_{w_{2}} - P_{w_{1}}, \qquad \Delta P \in \mathbb{R}^{3\times1}$$

Only the world point $P_{w_{1}}$ acquired from the c1 image is optimized; the equation means that the 3D points corresponding to the same feature point in the two images are theoretically the same point.
9. The GNSS and camera extrinsic calibration method for correcting unmanned aerial vehicle AR display errors according to claim 6, wherein the relative translational residual constraints of the camera are as follows:
Since the GNSS data exhibits non-negligible jitter while the camera pose is smooth, the relative pose of the camera is used to constrain the relative pose of the GNSS and thereby reduce its jitter. The constructed residual constraint equation is

$$\Delta t = t_{c_{2}c_{1}} - t_{g_{2}g_{1}}$$

where $t_{g_{2}g_{1}}$ is specifically calculated as

$$T_{g_{2}g_{1}} = T_{g_{2}w} T_{wg_{1}}$$
$$t_{g_{2}g_{1}} = R_{g_{2}w}\,t_{wg_{1}} + t_{g_{2}w}$$

Since the ENU coordinate system was aligned when the visual data was produced, the rotation R is essentially the identity matrix I, so the expression can be simplified to

$$t_{g_{2}g_{1}} = t_{wg_{1}} + t_{g_{2}w}$$

The optimization variable of this residual equation is $t_{g_{2}g_{1}}$; the equation means that the camera translation and the GNSS translation between two adjacent frames are theoretically equal.
10. The GNSS and camera extrinsic calibration method for correcting unmanned aerial vehicle AR display errors according to claim 6, wherein the camera absolute translation residual constraints are as follows:
The GNSS data is converted into world coordinates with the first frame as the origin, so that it is numerically aligned with the camera's world coordinates; an absolute translation constraint is added to prevent the null-space drift caused by the relative displacement constraint. The residual equation is constructed as (with the same notation as above):

$$\Delta t = t_{c_{1}w} - t_{g_{1}w}$$

The optimization variable of this residual equation is $t_{g_{1}w}$; the equation means that the translation of the camera's current frame relative to the world frame is theoretically equal to the translation of the current GNSS relative to the world frame.
CN202410242624.8A 2024-03-04 2024-03-04 GNSS and camera external parameter calibration method for correcting AR display error of unmanned aerial vehicle Pending CN118052886A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410242624.8A CN118052886A (en) 2024-03-04 2024-03-04 GNSS and camera external parameter calibration method for correcting AR display error of unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410242624.8A CN118052886A (en) 2024-03-04 2024-03-04 GNSS and camera external parameter calibration method for correcting AR display error of unmanned aerial vehicle

Publications (1)

Publication Number Publication Date
CN118052886A true CN118052886A (en) 2024-05-17

Family

ID=91050160

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410242624.8A Pending CN118052886A (en) 2024-03-04 2024-03-04 GNSS and camera external parameter calibration method for correcting AR display error of unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN118052886A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118226871A (en) * 2024-05-23 2024-06-21 北京数易科技有限公司 Unmanned aerial vehicle obstacle avoidance method, system and medium based on deep reinforcement learning



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination