CN113415433B - Pod attitude correction method and device based on three-dimensional scene model and unmanned aerial vehicle


Info

Publication number
CN113415433B
Authority
CN
China
Prior art keywords: pod, current, camera, attitude, scene model
Prior art date
Legal status: Active
Application number
CN202110871786.4A
Other languages
Chinese (zh)
Other versions
CN113415433A
Inventor
周黎明
郭有威
刘夯
Current Assignee
Chengdu Zongheng Dapeng Unmanned Plane Technology Co ltd
Original Assignee
Chengdu Zongheng Dapeng Unmanned Plane Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Zongheng Dapeng Unmanned Plane Technology Co ltd
Priority to CN202110871786.4A
Publication of CN113415433A
Application granted
Publication of CN113415433B
Status: Active


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B64: AIRCRAFT; AVIATION; COSMONAUTICS
    • B64D: EQUIPMENT FOR FITTING IN OR TO AIRCRAFT; FLIGHT SUITS; PARACHUTES; ARRANGEMENTS OR MOUNTING OF POWER PLANTS OR PROPULSION TRANSMISSIONS IN AIRCRAFT
    • B64D47/00: Equipment not otherwise provided for
    • B64D47/08: Arrangements of cameras
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B64: AIRCRAFT; AVIATION; COSMONAUTICS
    • B64C: AEROPLANES; HELICOPTERS
    • B64C39/00: Aircraft not otherwise provided for
    • B64C39/02: Aircraft not otherwise provided for characterised by special use
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/02: Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B64: AIRCRAFT; AVIATION; COSMONAUTICS
    • B64U: UNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U2101/00: UAVs specially adapted for particular uses or applications
    • B64U2101/30: UAVs specially adapted for particular uses or applications for imaging, photography or videography

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a pod attitude correction method based on a three-dimensional scene model. The method uses a preset three-dimensional scene model to generate, in real time, the projection image most similar to the current video frame image, performs feature point matching between the two images, and obtains correction values for the pod attitude and the focal length through an algorithm. During a highly automated, low-altitude, large-tilt-angle unmanned aerial vehicle video inspection task in a complex scene, the attitude and the camera are calibrated so that the region of interest (ROI) is effectively covered, the absolute positioning accuracy is improved, and the validity of the operation is guaranteed. The application also provides a pod attitude correction device based on the three-dimensional scene model, an unmanned aerial vehicle and a computer-readable storage medium, all of which share the above beneficial effects.

Description

Pod attitude correction method and device based on three-dimensional scene model and unmanned aerial vehicle
Technical Field
The application relates to the technical field of pod attitude correction, in particular to a pod attitude correction method based on a three-dimensional scene model, a pod attitude correction device based on the three-dimensional scene model, an unmanned aerial vehicle and a computer readable storage medium.
Background
Automatic unmanned aerial vehicle (UAV) inspection operations using a two-axis or three-axis pod are an emerging direction, mainly applied to the inspection of electric power lines, pipelines, rivers, highways, railways, bridges and the like. Ensuring that the field of view captured by the camera in the pod covers the user's region of interest (ROI) is a precondition for further image processing. In particular, during line patrol the UAV usually flies in a straight line over a long distance, and if a section of imagery in the middle is invalid, the result of the entire sortie may be invalidated. Improving the relative attitude accuracy alone cannot meet the task requirements of a large-scale working area. To further improve the absolute attitude accuracy, either a dedicated set of image control points must be maintained within the region of interest (ROI), or a reference image must be used to provide absolute position information.
In a first prior-art approach, feature point matching is performed between a reference image containing elevation information and the real-time video image of the UAV to calibrate the UAV attitude; this requires the current video image to be captured at an angle similar to that of the reference image, such as an orthographic view. If the current video image has a large tilt angle relative to the original capture angle of the reference image, or if the camera is currently in a complex scene with many surrounding buildings, this reference-image matching method cannot be applied.
In a second prior-art approach, the UAV's video image data are combined with depth data from a depth camera to update an original scene model. This scheme relies on manually maintained artificial marker objects, which are detected during flight to optimize the positioning and attitude of the UAV, and its working distance is severely limited by the range of the depth camera.
Schemes that perform real-time attitude correction with image control points have the drawback that the ROI must be located through manual intervention during the take-off stage of the UAV. Replacing the manual image control points of the ROI area with several pre-stored reference maps is an effective way to improve automation. However, a pre-stored reference map is only an image captured at one particular angle (generally a downward orthographic view); when the drone is in a complex environment, such as a scene with many surrounding buildings, the captured image cannot be compared effectively with the reference map.
Therefore, how to solve the above technical problems is an issue that those skilled in the art need to address.
Disclosure of Invention
The pod attitude correction method based on a three-dimensional scene model, the pod attitude correction device based on a three-dimensional scene model, the unmanned aerial vehicle and the computer-readable storage medium provided by the present application enable the UAV pod to carry out inspection operations automatically in a complex scene environment, measure and correct the attitude of the UAV and the carried pod and the zoom focal length of the camera in real time, and guarantee that the ROI is effectively covered. The specific scheme is as follows:
the method comprises the steps of generating a corresponding projection image based on a preset three-dimensional scene model according to the current position and the current attitude of an unmanned aerial vehicle, the current position and the current attitude of a pod and the current focal length value of a pod camera, and acquiring feature points of the projection image and coordinates of three-dimensional space points corresponding to the feature points of the projection image in a world coordinate system; matching the characteristic points of the current video frame image with the characteristic points of the projected image, and reconstructing theoretical position data, theoretical attitude data and theoretical focal length value of the pod camera corresponding to the current video frame image according to the matched characteristic points and the coordinates of the three-dimensional space points corresponding to the characteristic points of the projected image in a world coordinate system; and correcting the pod attitude according to the current position, the current attitude and the current focal length value of the pod camera and the error values of the theoretical position data, the theoretical attitude data and the theoretical focal length value of the pod camera.
In this way, the preset three-dimensional scene model is used to generate, in real time, the projection image most similar to the current video frame image; feature point matching is then performed between the two images, and the correction values for the pod attitude and focal length are obtained through an algorithm. During a highly automated, low-altitude, large-tilt-angle UAV video inspection task in a complex scene, the attitude and the camera are calibrated so that the ROI (region of interest) is effectively covered, the absolute positioning accuracy is improved, and the validity of the operation is guaranteed. The present application also provides a pod attitude correction device based on the three-dimensional scene model, an unmanned aerial vehicle and a computer-readable storage medium, which all have the above beneficial effects and are not described again here.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flowchart of a pod attitude correction method based on a three-dimensional scene model according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a method for generating a projection image and acquiring feature points of the projection image and coordinates of a three-dimensional space point corresponding to the feature points of the projection image in a world coordinate system according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a pod attitude correction device based on a three-dimensional scene model according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the scope of protection of the present application.
Automatic unmanned aerial vehicle (UAV) inspection operations using a two-axis or three-axis pod are an emerging direction, mainly applied to the inspection of electric power lines, pipelines, rivers, highways, railways, bridges and the like. Ensuring that the field of view captured by the camera in the pod covers the user's region of interest (ROI) is a precondition for further image processing. In particular, during line patrol the UAV usually flies in a straight line over a long distance, and if a section of imagery in the middle is invalid, the result of the entire sortie may be invalidated. Improving the relative attitude accuracy alone cannot meet the task requirements of a large-scale working area. To further improve the absolute attitude accuracy, either a dedicated set of image control points must be maintained within the region of interest (ROI), or a reference image must be used to provide absolute position information.
Schemes that perform real-time attitude correction with image control points have the drawback that the ROI must be located through manual intervention during the take-off stage of the UAV. Replacing the manual image control points of the ROI area with several pre-stored reference maps is an effective way to improve automation. However, a pre-stored reference map is only an image captured at one particular angle (generally a downward orthographic view); when the UAV is in a complex environment, such as a scene with many surrounding buildings, the captured image cannot be compared effectively with the reference map.
In view of the above problems, this embodiment provides a pod attitude correction method based on a three-dimensional scene model, which enables a UAV pod to carry out inspection operations automatically in a complex scene environment, measures and corrects the attitude of the UAV and the carried pod and the zoom focal length of the camera in real time, and ensures that the ROI is effectively covered. Referring to fig. 1, fig. 1 is a flowchart of a pod attitude correction method based on a three-dimensional scene model according to an embodiment of the present application, which specifically includes:
s101: the method comprises the steps of obtaining the current position and the current posture of the unmanned aerial vehicle, the current position and the current posture of a pod, the current focal length value of a pod camera and a current video frame image collected by the pod camera.
This embodiment does not limit the manner of acquiring the current position and current attitude of the UAV, the current position and current attitude of the pod, the current focal length value of the pod camera, and the current video frame image captured by the pod camera; the acquisition may be performed with whatever acquisition equipment is actually installed on the UAV, as long as the purpose of this embodiment can be achieved. For example, the current position and attitude of the UAV in the world coordinate system can be obtained by fusing a differential GPS, an airborne IMU (inertial measurement unit) and other velocity and acceleration sensors, and the current position and attitude of the pod in the world coordinate system with the Earth's centre as origin can be obtained by combining the relative position of the UAV fuselage and the pod with the attitude data of the two-axis or three-axis pod, as sketched below. This embodiment also includes acquiring the intrinsic parameter of the pod camera, namely the current focal length value of the zoom camera, which can be obtained by reading the current parameters of the pod camera. In addition, this embodiment further includes acquiring the current video frame image captured by the pod camera, which can be obtained by reading the current image frame taken by the UAV's pod camera.
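For illustration only, the following minimal sketch shows one way such a conversion could be composed: the UAV's measured world pose is combined with the gimbal azimuth/pitch/roll angles and an assumed mount offset to give the pod camera's position and attitude in the world frame. The function name, Euler-angle convention and parameters are illustrative assumptions, not part of the patent.

```python
# Illustrative sketch only: compose the UAV's measured world pose with assumed gimbal
# angles and a mount offset to obtain the pod camera pose in the world frame.
import numpy as np
from scipy.spatial.transform import Rotation as R

def pod_pose_in_world(uav_pos_w, uav_yaw_pitch_roll_deg, gimbal_az_pitch_roll_deg,
                      mount_offset_body=np.zeros(3)):
    """All angles in degrees; an intrinsic 'ZYX' Euler convention is assumed."""
    R_wb = R.from_euler("ZYX", uav_yaw_pitch_roll_deg, degrees=True).as_matrix()    # body -> world
    R_bp = R.from_euler("ZYX", gimbal_az_pitch_roll_deg, degrees=True).as_matrix()  # pod -> body
    R_wp = R_wb @ R_bp                                        # pod camera attitude in the world frame
    p_wp = np.asarray(uav_pos_w) + R_wb @ mount_offset_body   # pod camera position in the world frame
    return p_wp, R_wp
```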
Specifically, an unmanned aerial vehicle with a pod is used for line patrol flight. The flight path of the unmanned aerial vehicle is generally planned in advance according to the required line patrol range. The pod acquires video images in real time in the flying process, acquires differential GPS information in real time, and acquires unmanned aerial vehicle and pod attitude information and current pod camera focal length measurement values from other sensors such as an IMU (inertial measurement Unit).
Furthermore, the world position of the UAV obtained through differential GPS is accurate, with centimetre-level precision, but there is an offset between the GPS sampling time and the camera exposure time, generally on the order of milliseconds; the resulting position error is this time offset multiplied by the UAV's velocity. The attitude angle error of the airborne IMU is generally on the order of 0.1 degree. The relative position error between the UAV and the pod mainly comes from structural installation errors and the low-frequency displacement introduced by the pod's damping device. The pod encoder error, determined by encoder precision, is generally between milliradians and microradians, but the errors of the two axes (azimuth and pitch) or three axes (azimuth, pitch and roll) superpose nonlinearly. The intrinsic parameter errors of the camera mainly comprise the measurement error of the current focal length value, generally on the order of micrometres to millimetres, and the error in the camera sensor size, generally on the order of nanometres to micrometres, which can be removed by pre-calibration. Therefore, the current position and attitude of the UAV, the current position and attitude of the pod, and the current focal length value of the pod camera acquired in this embodiment are reasonably accurate, providing reliable data support for the subsequent generation of the projection image.
S102: generating a corresponding projection image based on a preset three-dimensional scene model according to the current position and the current posture of the unmanned aerial vehicle, the current position and the current posture of the pod and the current focal length value of the pod camera, and acquiring the feature points of the projection image and the coordinates of the three-dimensional space points corresponding to the feature points of the projection image in a world coordinate system.
In this embodiment, the preset three-dimensional scene model is a three-dimensional scene model containing geographic position information, reconstructed using currently mature and commercially available techniques such as UAV oblique photography, ground image control points and three-dimensional reconstruction software such as Smart3D; the model accuracy and positioning accuracy can generally reach the centimetre level. In UAV inspection operations the working area is generally relatively fixed, so the scene model only needs to be produced once in advance and can then be reused for a long time.
In one implementable embodiment, the method for generating the preset three-dimensional scene model comprises: manually collecting image control points on the ground, and acquiring, in combination with the UAV oblique photography technique, a plurality of high-resolution images for three-dimensional reconstruction together with the ground image control point information; and inputting the plurality of high-resolution images and the ground image control point information into three-dimensional reconstruction software to obtain a three-dimensional scene model with geographic position information.
Specifically, image control points are first collected manually on the ground, and a plurality of high-resolution images (the resolution of the surveying camera is high relative to that of the pod camera) usable for three-dimensional reconstruction, together with the ground image control point information, are acquired by combining the UAV oblique photography technique that is already mature in the surveying and mapping field. Because this step is relatively costly and labour-intensive, it is generally performed about once a year for the same geographic area. The acquired high-resolution images and ground image control point information are then input into three-dimensional reconstruction software such as Smart3D to obtain a high-precision three-dimensional scene model with geographic position information. It can be understood that the pod attitude correction method provided by the present application uses this high-precision, geo-referenced three-dimensional scene model, generated in advance by oblique photography, as the basis for absolute positioning relative to the world coordinate system.
In this embodiment, according to the current position and current attitude of the UAV, the current position and current attitude of the pod, and the current focal length value of the pod camera, a projection image corresponding to the current video frame image can be generated from the preset three-dimensional scene model; the feature points of the projection image and the coordinates, in the world coordinate system, of the three-dimensional space points corresponding to those feature points are then computed.
In an implementation manner, please refer to fig. 2, fig. 2 is a flowchart of a method for generating a projection image and acquiring feature points of the projection image and coordinates of three-dimensional space points corresponding to the feature points of the projection image in a world coordinate system according to an embodiment of the present application, including:
s1021: and inputting the current position and the current posture of the unmanned aerial vehicle and the current position and the current posture of the pod into a preset three-dimensional scene model to obtain the virtual position and the virtual posture of the pod camera in the preset three-dimensional scene model.
Specifically, the step of inputting the current position and current attitude of the UAV and the current position and current attitude of the pod into the preset three-dimensional scene model to obtain the virtual position and virtual attitude of the pod camera in the preset three-dimensional scene model may comprise: converting the UAV's current GPS longitude, latitude and height data and the attitude data acquired by sensors such as the IMU into a position and an attitude in an XYZ coordinate system. This position and attitude are the virtual position and virtual attitude in a three-dimensional rectangular coordinate system whose origin is the Earth's centre, whose X axis is the ray from the origin to the point at longitude 0.0 degrees, latitude 0.0 degrees and height 0.0 metres, whose Y axis is the ray from the origin to the point at longitude 90.0 degrees, latitude 0.0 degrees and height 0.0 metres, and whose Z axis completes the right-handed frame, pointing toward the North Pole.
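A minimal sketch of this geodetic-to-Earth-centred conversion is given below, assuming the WGS-84 ellipsoid; the function and variable names are illustrative, and the axis comments follow the coordinate-system definition in the text.

```python
# Illustrative sketch: convert longitude/latitude/height to the Earth-centred XYZ
# coordinate system described above (WGS-84 ellipsoid parameters assumed).
import numpy as np

def geodetic_to_xyz(lon_deg, lat_deg, h_m):
    a, f = 6378137.0, 1.0 / 298.257223563           # WGS-84 semi-major axis and flattening (assumed)
    e2 = f * (2.0 - f)                               # first eccentricity squared
    lon, lat = np.radians(lon_deg), np.radians(lat_deg)
    N = a / np.sqrt(1.0 - e2 * np.sin(lat) ** 2)     # prime-vertical radius of curvature
    x = (N + h_m) * np.cos(lat) * np.cos(lon)        # X axis: toward longitude 0, latitude 0
    y = (N + h_m) * np.cos(lat) * np.sin(lon)        # Y axis: toward longitude 90 E, latitude 0
    z = (N * (1.0 - e2) + h_m) * np.sin(lat)         # Z axis: toward the North Pole
    return np.array([x, y, z])
```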
S1022: and generating a corresponding projection image in the preset three-dimensional scene model according to the virtual position and the virtual posture of the pod camera in the preset three-dimensional scene model and the current focal length value of the pod camera, wherein the projection image comprises a texture sub-image and a position sub-image.
Specifically, the projection image is divided into a texture sub-image and a position sub-image, which have identical resolutions. The resolution of the projection image is specified as follows: the current field-width angle A is calculated from the pod camera resolution and the current focal length value, and a larger field-width angle A2 is calculated from the estimated maximum accumulated error of the measured pose (i.e., the acquired current position and attitude of the UAV and of the pod), so that the projection image can contain, but is not limited to, the actual field of view corresponding to the current real-time video frame. If the pod camera resolution is C × D, the projection image resolution is (A2 × C/A) × (A2 × D/A).
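The sketch below illustrates this sizing rule. The sensor width, focal length and error-margin figures in the usage comment are assumptions for illustration, not values taken from the patent.

```python
# Illustrative sketch of the projection-image sizing rule described above.
import math

def projection_resolution(sensor_w_mm, focal_mm, cam_res, margin_deg):
    C, D = cam_res                                                      # pod camera resolution C x D
    A = 2.0 * math.degrees(math.atan(sensor_w_mm / (2.0 * focal_mm)))   # current field-width angle A
    A2 = A + 2.0 * margin_deg                                           # enlarged angle covering the pose-error margin
    return int(round(A2 * C / A)), int(round(A2 * D / A))               # (A2*C/A) x (A2*D/A)

# Example (assumed values): a 1920 x 1080 camera, 12.8 mm sensor width, 25 mm focal
# length, 3-degree margin on each side of the field of view:
# projection_resolution(12.8, 25.0, (1920, 1080), 3.0)
# -> about 1.2 times the camera resolution in each dimension
```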
Further, step S1022 is to generate a corresponding projection image in the preset three-dimensional scene model according to the virtual position and the virtual pose of the pod camera in the preset three-dimensional scene model and the current focal length value of the pod camera, where the projection image includes a texture sub-image and a position sub-image, and includes:
s10221: and calculating the projection direction and the field angle of the current field of view of the pod camera in the preset three-dimensional scene model according to the virtual position and the virtual attitude of the pod camera in the preset three-dimensional scene model, the size and the resolution of the preset pod camera and the current focal length value of the pod camera.
S10222: determining a corresponding projection area in a preset three-dimensional scene model according to the projection direction and the field angle of the current field of view of the pod camera in the preset three-dimensional scene model, rendering the color of each grid of the projection area on a texture sub-image of a projection image, converting the three-dimensional coordinate of each grid into a world coordinate system, and rendering the three-dimensional coordinate of each grid on a position sub-image of the projection image, wherein only the grid positioned at the foreground part is rendered for a plurality of grids with shielding relations in the projection area.
Specifically, the projection direction and field angle of the pod camera's current field of view in the preset three-dimensional scene model are calculated from the acquired current position and attitude of the pod in the world coordinate system, the sensor size and resolution obtained by pre-calibrating the pod camera, and the measured value of the current zoom focal length. For each mesh of the preset three-dimensional scene model, the colour of the mesh vertices is rendered into the texture sub-image of the projection image, for example with OpenGL vertex and pixel shader programs, and the three-dimensional coordinates of the mesh, converted to the world coordinate system, are rendered into the position sub-image of the projection image (which stores non-colour data). During rendering, the OpenGL depth buffer is used so that, among the scene-model meshes in the projection area that have an occlusion relationship, only the mesh closest to the pod camera position is rendered. That is, for several mutually occluding meshes in the projection area, the depth buffer dynamically guarantees that only the mesh part with the smallest distance to the pod camera, i.e. the foreground mesh, is rendered; the other meshes are invisible to the pod camera because they are occluded or outside the field of view, and their rendering results are omitted or overwritten. The rendering result therefore contains only the meshes visible to the pod camera, and the texture sub-image simulates, as closely as possible, the real image the pod camera would obtain with its current field of view in the complex environment, for use in the subsequent image matching. A simplified sketch of this two-buffer rendering is given below.
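The following is a simplified CPU sketch, not the OpenGL shader implementation: each visible scene point is splatted into a texture sub-image (colour) and a position sub-image (world XYZ), and a depth buffer keeps only the point nearest to the pod camera. Mesh triangles are approximated here by their vertices, and all names are illustrative.

```python
# Simplified point-splat sketch of the two-buffer, depth-tested rendering described above.
import numpy as np

def render_projection(points_w, colors, R_wc, cam_pos, K, res_wh):
    """points_w: Nx3 world coordinates of scene-mesh vertices; colors: Nx3 RGB values;
    R_wc: pod camera rotation (camera -> world); cam_pos: camera centre in the world frame;
    K: 3x3 intrinsic matrix; res_wh: projection image resolution (W, H)."""
    W, H = res_wh
    tex = np.zeros((H, W, 3), dtype=np.uint8)                 # texture sub-image (colour)
    pos = np.full((H, W, 3), np.nan)                          # position sub-image (world XYZ)
    depth = np.full((H, W), np.inf)                           # depth buffer
    pts_c = (R_wc.T @ (np.asarray(points_w) - cam_pos).T).T   # world -> camera frame
    for p_w, p_c, col in zip(points_w, pts_c, colors):
        z = p_c[2]
        if z <= 0.0:                                          # behind the camera: not visible
            continue
        uvw = K @ p_c
        u, v = int(round(uvw[0] / uvw[2])), int(round(uvw[1] / uvw[2]))
        if 0 <= u < W and 0 <= v < H and z < depth[v, u]:     # depth test: keep only the foreground point
            depth[v, u] = z
            tex[v, u] = col                                   # vertex colour -> texture sub-image
            pos[v, u] = p_w                                   # world coordinates -> position sub-image
    return tex, pos
```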
S1023: extracting characteristic points from texture sub-images of the projected images, and acquiring coordinates of the three-dimensional space points corresponding to each characteristic point in a world coordinate system from the position sub-images.
This embodiment does not limit the specific method for extracting feature points from the texture sub-image of the projection image; any mature method in the prior art may be adopted as long as the purpose of this embodiment is achieved. However, the feature point extraction method used on the texture sub-image should be consistent with the one used on the current video frame image, so as to ensure the accuracy of the subsequent feature point matching. For example, in this embodiment a scale-invariant feature point extraction method such as SIFT may be used to obtain a set of feature points and their descriptors.
Further, although the angular scale corresponding to each pixel of the current video frame image and of the projection image is very close, the scale error still needs to be considered in order to maximise accuracy. In addition, considering that blobs in a two-dimensional image are easier to locate in three-dimensional space than corner points and edge points, a scale-invariant blob feature extraction method such as SIFT is used to extract the feature points and feature descriptors of the projection image, and the coordinates of the three-dimensional space point corresponding to each feature point in the world coordinate system are obtained from the position sub-image according to the feature point's pixel position in the texture sub-image. The specific method is: obtain the two-dimensional pixel position (u, v) of the feature point in the texture sub-image, and read the data value at the same pixel position in the position sub-image of identical resolution; this data value is not an RGB colour value but an XYZ position value in the three-dimensional rectangular coordinate system with the Earth's centre as origin.
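A minimal sketch of this lookup is given below, assuming OpenCV with SIFT support is available; the function and variable names are illustrative.

```python
# Illustrative sketch: extract blob feature points from the texture sub-image and read
# the world XYZ value stored at the same pixel position (u, v) of the position sub-image.
import cv2
import numpy as np

def projection_features(tex_subimage, pos_subimage):
    sift = cv2.SIFT_create()
    gray = cv2.cvtColor(tex_subimage, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    points_3d = []
    for kp in keypoints:
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        points_3d.append(pos_subimage[v, u])      # XYZ in the Earth-centred frame, not an RGB value
    return keypoints, descriptors, np.array(points_3d)
```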
S103: and extracting characteristic points from the current video frame image.
This embodiment does not limit the specific method for extracting feature points from the current video frame image; any mature method in the prior art may be adopted as long as the purpose of this embodiment is achieved. However, the feature point extraction method used on the current video frame image should be consistent with the one used on the texture sub-image of the projection image, so as to ensure the accuracy of the subsequent feature point matching. For example, in this embodiment a scale-invariant feature point extraction method such as SIFT may be used to obtain a set of feature points and their descriptors.
S104: and matching the characteristic points of the current video frame image with the characteristic points of the projected image, and reconstructing theoretical position data, theoretical attitude data and theoretical focal length values of the pod camera corresponding to the current video frame image according to the coordinates of the matched characteristic points and the characteristic points of the projected image corresponding to the three-dimensional space points in a world coordinate system.
Specifically, the feature points of the current video frame image are matched with the feature points of the projection image. According to the two-view geometric relations in computer vision, the pod camera matrix can be recovered as long as the number of matched feature points is at least 6; since the errors in the current position and attitude of the UAV, the current position and attitude of the pod, and the current focal length value of the pod camera obtained in step S101 are small, more than 6 matches can normally be obtained. If the number of matches is insufficient, the frame can be discarded and step S101 executed again; if the number of matches is sufficient, the following procedure continues.
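A minimal matching sketch is shown below, assuming OpenCV; the ratio-test threshold is an assumption, while the minimum of 6 matches follows the text above.

```python
# Illustrative sketch: match the SIFT descriptors of the current video frame against
# those of the projection image and keep the frame only if enough matches survive.
import cv2

def match_frame_to_projection(desc_frame, desc_proj, ratio=0.75, min_matches=6):
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(desc_frame, desc_proj, k=2)
    good = [m for m, n in candidates if m.distance < ratio * n.distance]   # Lowe ratio test
    if len(good) < min_matches:
        return None      # too few matches: discard this frame and return to step S101
    return good          # each match pairs a frame feature with a projection feature (and its 3D point)
```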
In an implementation manner, reconstructing theoretical position data, theoretical attitude data and theoretical focal length value of the pod camera corresponding to the current video frame image according to coordinates of the matched feature points and the feature points of the projection image corresponding to the three-dimensional space points in the world coordinate system, includes: calculating a pod camera matrix corresponding to the current video frame image according to the coordinates of the three-dimensional space point in a world coordinate system corresponding to the matched feature points and the feature points of the projected image; solving a pod camera rotation matrix, a pod camera central position relative to the world coordinate system and a pod camera internal parameter matrix according to the numerical relations of the pod camera matrix and the pod camera rotation matrix, the pod camera central position relative to the world coordinate system, the pod camera internal parameter matrix and the unit matrix; and solving the theoretical focal length value of the pod camera according to the internal parameter matrix of the pod camera and the preset pixel width of the pod camera.
Specifically, according to the matched feature points and the coordinates of the corresponding three-dimensional space points in the world coordinate system, the pod camera matrix corresponding to the current video frame image, i.e. a 3 × 4 matrix P of rank 3, can be calculated by a maximum likelihood estimation method: P = K[R | t] = [M | -MC], where R is the 3 × 3 pod camera rotation matrix, C is the centre position of the pod camera (in the world coordinate system), K is the pod camera intrinsic parameter matrix, M = KR, and t = -RC.
Further, left-multiplying the above formula by -M^(-1) gives -M^(-1)P = [-I | C], from which the value of C is obtained, where I is the identity matrix. A QR (RQ) decomposition of M then yields the values of K and R. The decomposition result is not unique, but redundant results can be eliminated as follows: since the measured pose information of the UAV and the pod contains errors but does not deviate excessively from the true value, a rough rotation matrix R2 can be computed from the acquired current position and attitude of the UAV and of the pod, and R2 is used to constrain the R obtained from the decomposition (i.e. R should not differ much from R2), yielding a unique decomposition result. In this way R2 constrains the pod camera rotation matrix R so that its value is unique. Finally, the pod camera rotation matrix R, the world-coordinate position C of the pod camera centre, and the pod camera intrinsic parameter matrix K are solved. In addition, the current theoretical focal length value F of the pod camera is obtained by multiplying the element K11 in the first row and first column of the intrinsic matrix K by the known pixel width dx of the pod camera. Thus the theoretical position data, theoretical attitude data and theoretical focal length value of the pod camera corresponding to the current video frame image are obtained.
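A minimal sketch of this decomposition is given below, assuming numpy/scipy; the sign handling follows the usual convention of forcing a positive diagonal of K and is an implementation choice, not mandated by the text.

```python
# Illustrative sketch: recover C, K, R and the theoretical focal length F from an
# estimated 3 x 4 pod camera matrix P, following the relations above.
import numpy as np
from scipy.linalg import rq

def decompose_camera_matrix(P, pixel_width_dx):
    M, p4 = P[:, :3], P[:, 3]
    C = -np.linalg.solve(M, p4)           # camera centre, from -M^(-1) P = [-I | C]
    K, R = rq(M)                          # M = K R, with K upper triangular and R orthogonal
    T = np.diag(np.sign(np.diag(K)))      # resolve the sign ambiguity of the RQ decomposition
    K, R = K @ T, T @ R
    if np.linalg.det(R) < 0:              # P is only defined up to scale, so the overall sign is free
        R = -R
    K = K / K[2, 2]                       # normalise the intrinsic matrix so that K33 = 1
    F = K[0, 0] * pixel_width_dx          # theoretical focal length value F = K11 * dx
    return R, C, K, F

# Among remaining candidate solutions, the one whose R is closest to the rough rotation
# matrix R2 computed from the measured pose would be retained, as described above.
```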
S105: and calculating the current position and the current attitude of the pod camera according to the current position and the current attitude of the unmanned aerial vehicle and the current position and the current attitude of the pod, and correcting the attitude of the pod according to the current position, the current attitude and the current focal length value of the pod camera and the error values of the theoretical position data, the theoretical attitude data and the theoretical focal length value of the pod camera.
Specifically, the pod camera rotation matrix R, position C in the world coordinate system, and theoretical focal length value F calculated in step S104 are taken as the theoretical reference. Then, from the previously measured parameters (the UAV's longitude, latitude, height, azimuth, pitch and roll, and the pod's azimuth and pitch, plus roll for a three-axis gimbal pod, relative to the UAV), the rotation matrix R1 and position C1 currently measured for the pod camera in the world coordinate system are calculated, and the focal length value F1 is read from the focusing mechanism. It can be understood that the theoretical focal length value F calculated in step S104 can serve as a form of closed-loop feedback for the focus adjustment mechanism in step S105.
Further, although correcting the position of the pod camera is inconvenient, the rotation matrix (i.e. the attitude) and the focal length value can be corrected. The error between R1 and R can therefore be eliminated by adjusting the roll of the UAV and the azimuth, pitch and roll of the pod relative to the UAV, and the error between F1 and F can be eliminated by zooming; in this way the pod attitude is corrected. By taking the rotation matrix of the pod camera in the world coordinate system and the focal length value of the pod camera as the correction targets, this embodiment eliminates the influence of the relative position and attitude errors between the UAV and the pod.
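For illustration, a minimal sketch of turning the pose and focal errors into correction commands follows, assuming scipy; the Euler convention and the mapping of the increments onto specific gimbal axes are assumptions, not part of the patent.

```python
# Illustrative sketch: convert the error between the measured pose (R1, F1) and the
# theoretical pose (R, F) into gimbal angle increments and a zoom correction.
import numpy as np
from scipy.spatial.transform import Rotation as Rot

def pose_correction(R1_measured, R_theoretical, F1_measured, F_theoretical):
    R_err = Rot.from_matrix(R_theoretical @ R1_measured.T)        # rotation taking R1 onto R
    d_azimuth, d_pitch, d_roll = R_err.as_euler("ZYX", degrees=True)
    d_focal = F_theoretical - F1_measured                         # correction applied by zooming
    return (d_azimuth, d_pitch, d_roll), d_focal
```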
It can be understood that the pod attitude correction method based on the three-dimensional scene model provided by the present application can be executed in real time, or at preset intervals, while the UAV patrols the line; the specific execution frequency can be set as required to ensure that the ROI area is effectively covered.
The pod attitude correction method based on a three-dimensional scene model provided by the present application is an automatic, low-altitude, large-tilt-angle pod attitude correction method built on a high-precision three-dimensional scene model: the projection image most similar to the current video frame image is generated in real time, feature point matching is performed between the two images, and the correction values of attitude and focal length are obtained through an algorithm. In addition, with the pod attitude correction method provided by the present application, the UAV only needs to ensure, at the start of the operation, that the field of view of the pod camera lies within a three-dimensional scene that includes, but is not limited to, the ROI; the pod camera angle does not need to be adjusted manually at the initial stage to search for the ROI or for markers, which greatly improves the degree of automation.
The pod attitude correction device based on the three-dimensional scene model provided by the embodiment of the present application is introduced below, and the pod attitude correction device based on the three-dimensional scene model described below and the pod attitude correction method based on the three-dimensional scene model described above may be referred to in correspondence with each other. Referring to fig. 3, fig. 3 is a schematic structural diagram of a pod attitude modification apparatus based on a three-dimensional scene model according to an embodiment of the present application, including:
the measurement data acquisition module 201 is used for acquiring the current position and the current attitude of the unmanned aerial vehicle, the current position and the current attitude of the pod, the current focal length value of the pod camera and the current video frame image acquired by the pod camera;
the theoretical characteristic point acquisition module 202 is used for generating a corresponding projection image based on a preset three-dimensional scene model according to the current position and the current posture of the unmanned aerial vehicle, the current position and the current posture of the pod and the current focal length value of a pod camera, and acquiring the characteristic points of the projection image and the coordinates of the three-dimensional space points corresponding to the characteristic points of the projection image in a world coordinate system;
a measurement feature point obtaining module 203, configured to extract feature points from a current video frame image;
a theoretical pose calculation module 204, configured to match feature points of the current video frame image with feature points of the projection image, and reconstruct theoretical position data, theoretical attitude data, and a theoretical focal length value of the pod camera corresponding to the current video frame image according to coordinates of the three-dimensional space points corresponding to the matched feature points and feature points of the projection image in a world coordinate system;
and the correction data acquisition module 205 is used for calculating the current position and the current attitude of the pod camera according to the current position and the current attitude of the unmanned aerial vehicle and the current position and the current attitude of the pod, and correcting the attitude of the pod according to the current position, the current attitude and the current focal length value of the pod camera and error values of theoretical position data, theoretical attitude data and theoretical focal length value of the pod camera.
In some specific embodiments, the theoretical feature point obtaining module includes:
the virtual pose acquisition unit is used for inputting the current position and the current pose of the unmanned aerial vehicle and the current position and the current pose of the pod into a preset three-dimensional scene model to obtain the virtual position and the virtual pose of the pod camera in the preset three-dimensional scene model;
the projection image generation unit is used for generating a corresponding projection image in the preset three-dimensional scene model according to the virtual position and the virtual posture of the pod camera in the preset three-dimensional scene model and the current focal length value of the pod camera, and the projection image comprises texture sub-images and position sub-images;
and the theoretical characteristic point extraction unit is used for extracting characteristic points from the texture subimages of the projected images and acquiring the coordinates of the three-dimensional space points corresponding to each characteristic point in the world coordinate system from the position subimages.
In some specific embodiments, the theoretical pose calculation module includes:
the camera matrix calculation unit is used for calculating a pod camera matrix corresponding to the current video frame image according to the coordinates of the three-dimensional space point in the world coordinate system corresponding to the matched feature points and the feature points of the projected image;
the camera pose calculation unit is used for solving a pod camera rotation matrix, a pod camera central position relative to the world coordinate system and a pod camera internal parameter matrix according to the numerical relations of the pod camera matrix and the pod camera rotation matrix, the pod camera central position relative to the world coordinate system, the pod camera internal parameter matrix and the unit matrix;
and the camera focal length calculating unit is used for solving a pod camera theoretical focal length value according to the pod camera internal parameter matrix and the preset pod camera pixel width.
In some specific embodiments, the apparatus further comprises: a three-dimensional scene model generation module for generating the preset three-dimensional scene model.
In some specific embodiments, the three-dimensional scene model generation module includes:
the scene data acquisition unit is used for acquiring image control points manually on the ground and acquiring a plurality of high-resolution images for three-dimensional reconstruction and ground image control point information by combining an unmanned aerial vehicle oblique photography technology;
and the scene model generating unit is used for inputting the information of the control points of the plurality of high-resolution images and the ground images into three-dimensional reconstruction software to obtain a three-dimensional scene model with geographical position information.
Since the embodiments of the apparatus portion and the method portion correspond to each other, please refer to the description of the embodiments of the method portion for the embodiments of the apparatus portion, which is not repeated here.
In the following, an unmanned aerial vehicle provided by an embodiment of the present application is introduced, and the unmanned aerial vehicle described below and the pod attitude correction method based on the three-dimensional scene model described above may be referred to correspondingly.
This embodiment also provides an unmanned aerial vehicle equipped with a pod, comprising:
a memory for storing a computer program;
and the processor is used for realizing the steps of the nacelle attitude correction method based on the three-dimensional scene model when executing the computer program.
The memory comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and computer-readable instructions, and the internal memory provides an environment for the operating system and the computer-readable instructions in the non-volatile storage medium to run. The processor provides the computational and control capabilities for the drone, executing computer programs stored in memory.
Since the embodiment of the unmanned aerial vehicle portion corresponds to the embodiment of the pod attitude correction method portion based on the three-dimensional scene model, please refer to the description of the embodiment of the pod attitude correction portion based on the three-dimensional scene model, and details thereof are not repeated here.
In the following, a computer-readable storage medium provided by an embodiment of the present application is introduced, and the computer-readable storage medium described below and the nacelle attitude modification method based on a three-dimensional scene model described above may be referred to in correspondence with each other.
The present application provides a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the method for nacelle attitude modification based on a three-dimensional scene model as described above.
Since the embodiment of the computer-readable storage medium portion corresponds to the embodiment of the pod attitude correction method portion based on the three-dimensional scene model, please refer to the description of the embodiment of the pod attitude correction method portion based on the three-dimensional scene model for the embodiment of the computer-readable storage medium portion, and details are not repeated here.
The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The pod attitude correction method based on the three-dimensional scene model, the pod attitude correction device based on the three-dimensional scene model, the unmanned aerial vehicle and the computer-readable storage medium provided by the application are described in detail above. The principles and embodiments of the present application are explained herein using specific examples, which are provided only to help understand the method and the core idea of the present application. It should be noted that, for those skilled in the art, without departing from the principle of the present application, the present application can also make several improvements and modifications, and those improvements and modifications also fall into the protection scope of the claims of the present application.

Claims (10)

1. A pod attitude correction method based on a three-dimensional scene model is characterized by comprising the following steps:
acquiring the current position and the current attitude of the unmanned aerial vehicle, the current position and the current attitude of a pod, the current focal length value of a pod camera and a current video frame image acquired by the pod camera;
generating a corresponding projection image based on a preset three-dimensional scene model according to the current position and the current posture of the unmanned aerial vehicle, the current position and the current posture of the pod and the current focal length value of a pod camera, and acquiring feature points of the projection image and coordinates of three-dimensional space points corresponding to the feature points of the projection image in a world coordinate system;
extracting feature points from a current video frame image;
matching the characteristic points of the current video frame image with the characteristic points of the projected image, and reconstructing theoretical position data, theoretical attitude data and theoretical focal length value of the pod camera corresponding to the current video frame image according to the matched characteristic points and the coordinates of the three-dimensional space points corresponding to the characteristic points of the projected image in a world coordinate system;
and calculating the current position and the current attitude of the pod camera according to the current position and the current attitude of the unmanned aerial vehicle and the current position and the current attitude of the pod, and correcting the attitude of the pod according to the current position, the current attitude and the current focal length value of the pod camera and the error values of the theoretical position data, the theoretical attitude data and the theoretical focal length value of the pod camera.
2. The pod attitude modification method based on the three-dimensional scene model according to claim 1, wherein the generating a corresponding projection image based on the preset three-dimensional scene model according to the current position and the current attitude of the unmanned aerial vehicle, the current position and the current attitude of the pod, and the current focal length value of the pod camera, and acquiring the feature points of the projection image and the coordinates of the three-dimensional space points corresponding to the feature points of the projection image in a world coordinate system comprises:
inputting the current position and the current posture of the unmanned aerial vehicle and the current position and the current posture of the pod into a preset three-dimensional scene model to obtain the virtual position and the virtual posture of the pod camera in the preset three-dimensional scene model;
generating a corresponding projection image in a preset three-dimensional scene model according to the virtual position and the virtual posture of the pod camera in the preset three-dimensional scene model and the current focal length value of the pod camera, wherein the projection image comprises a texture sub-image and a position sub-image;
extracting characteristic points from texture sub-images of the projected images, and acquiring coordinates of the three-dimensional space points corresponding to each characteristic point in a world coordinate system from the position sub-images.
3. The method for correcting the pod attitude based on the three-dimensional scene model according to claim 2, wherein the generating of the corresponding projection image in the preset three-dimensional scene model according to the virtual position and the virtual attitude of the pod camera in the preset three-dimensional scene model and the current focus value of the pod camera, the projection image comprising the texture sub-image and the position sub-image, comprises:
calculating the projection direction and the field angle of the current field of view of the pod camera in the preset three-dimensional scene model according to the virtual position and the virtual attitude of the pod camera in the preset three-dimensional scene model, the size and the resolution of the preset pod camera and the current focal length value of the pod camera;
determining a corresponding projection area in a preset three-dimensional scene model according to the projection direction and the field angle of the current field of view of the pod camera in the preset three-dimensional scene model, rendering the color of each grid of the projection area on a texture sub-image of a projection image, converting the three-dimensional coordinate of each grid into a world coordinate system, and rendering the three-dimensional coordinate of each grid on a position sub-image of the projection image, wherein only the grid positioned at the foreground part is rendered for a plurality of grids with shielding relations in the projection area.
4. The pod attitude correction method based on the three-dimensional scene model according to claim 1, wherein reconstructing the theoretical position data, the theoretical attitude data and the theoretical focal length value of the pod camera corresponding to the current video frame image according to the matched feature points and the coordinates, in the world coordinate system, of the three-dimensional space points corresponding to the feature points of the projection image comprises:
calculating a pod camera matrix corresponding to the current video frame image according to the matched feature points and the coordinates, in the world coordinate system, of the three-dimensional space points corresponding to the feature points of the projection image;
solving a pod camera rotation matrix, a pod camera center position relative to the world coordinate system and a pod camera intrinsic parameter matrix according to the numerical relationship between the pod camera matrix and the pod camera rotation matrix, the pod camera center position relative to the world coordinate system, the pod camera intrinsic parameter matrix and the identity matrix;
and solving the theoretical focal length value of the pod camera according to the intrinsic parameter matrix of the pod camera and the preset pixel width of the pod camera.
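A minimal sketch of these three sub-steps, assuming at least six matched 2D/3D correspondences are available as numpy arrays; the plain direct linear transform (without outlier rejection) and the helper name are assumptions of this sketch. The decomposition relies on the relation P = K * R * [I | -C], and the focal length in millimeters follows from fx in pixels multiplied by the preset pixel width.

import numpy as np
import cv2

def camera_from_matches(pts_2d, pts_3d, pixel_width_mm):
    # Direct linear transform: every matched 2D/3D pair contributes two rows,
    # and the 3x4 pod camera matrix P is the right singular vector belonging
    # to the smallest singular value. At least six matches are needed.
    rows = []
    for (u, v), (X, Y, Z) in zip(np.asarray(pts_2d, float), np.asarray(pts_3d, float)):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    P = vt[-1].reshape(3, 4)
    # Split P = K * R * [I | -C] into the intrinsic matrix K, the rotation R
    # and the homogeneous camera center.
    K, R, C_h = cv2.decomposeProjectionMatrix(P)[:3]
    K = K / K[2, 2]                        # normalize so that K[2, 2] == 1
    C = (C_h[:3] / C_h[3]).ravel()         # camera center in world coordinates
    focal_mm = K[0, 0] * pixel_width_mm    # fx [pixels] * pixel width [mm/pixel]
    return K, R, C, focal_mm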
5. The pod attitude correction method based on the three-dimensional scene model according to claim 1, wherein the preset three-dimensional scene model is generated by the following steps:
laying out image control points manually on the ground, and acquiring a plurality of high-resolution images for three-dimensional reconstruction together with the ground image control point information by means of unmanned aerial vehicle oblique photography;
and inputting the plurality of high-resolution images and the ground image control point information into three-dimensional reconstruction software to obtain a three-dimensional scene model carrying geographic position information.
6. A pod attitude correction device based on a three-dimensional scene model is characterized by comprising:
the measurement data acquisition module is used for acquiring the current position and the current attitude of the unmanned aerial vehicle, the current position and the current attitude of the pod, the current focal length value of the pod camera and the current video frame image acquired by the pod camera;
the theoretical feature point acquisition module is used for generating a corresponding projection image based on the preset three-dimensional scene model according to the current position and the current attitude of the unmanned aerial vehicle, the current position and the current attitude of the pod and the current focal length value of the pod camera, and acquiring the feature points of the projection image and the coordinates, in the world coordinate system, of the three-dimensional space points corresponding to the feature points of the projection image;
the measurement feature point acquisition module is used for extracting feature points from the current video frame image;
the theoretical pose calculation module is used for matching the feature points of the current video frame image with the feature points of the projection image, and reconstructing the theoretical position data, the theoretical attitude data and the theoretical focal length value of the pod camera corresponding to the current video frame image according to the matched feature points and the coordinates, in the world coordinate system, of the three-dimensional space points corresponding to the feature points of the projection image;
and the correction data acquisition module is used for calculating the current position and the current attitude of the pod camera according to the current position and the current attitude of the unmanned aerial vehicle and the current position and the current attitude of the pod, and correcting the attitude of the pod according to the error values between the current position, the current attitude and the current focal length value of the pod camera and the theoretical position data, the theoretical attitude data and the theoretical focal length value of the pod camera.
7. The pod attitude correction device based on the three-dimensional scene model according to claim 6, wherein the theoretical feature point acquisition module comprises:
the virtual pose acquisition unit, which is used for inputting the current position and the current attitude of the unmanned aerial vehicle and the current position and the current attitude of the pod into the preset three-dimensional scene model to obtain the virtual position and the virtual attitude of the pod camera in the preset three-dimensional scene model;
the projection image generation unit, which is used for generating the corresponding projection image in the preset three-dimensional scene model according to the virtual position and the virtual attitude of the pod camera in the preset three-dimensional scene model and the current focal length value of the pod camera, wherein the projection image comprises a texture sub-image and a position sub-image;
and the theoretical feature point extraction unit, which is used for extracting feature points from the texture sub-image of the projection image and acquiring, from the position sub-image, the coordinates in the world coordinate system of the three-dimensional space point corresponding to each feature point.
8. The pod attitude correction device based on the three-dimensional scene model according to claim 6, wherein the theoretical pose calculation module comprises:
the camera matrix calculation unit, which is used for calculating the pod camera matrix corresponding to the current video frame image according to the matched feature points and the coordinates, in the world coordinate system, of the three-dimensional space points corresponding to the feature points of the projection image;
the camera pose calculation unit, which is used for solving the pod camera rotation matrix, the pod camera center position relative to the world coordinate system and the pod camera intrinsic parameter matrix according to the numerical relationship between the pod camera matrix and the pod camera rotation matrix, the pod camera center position relative to the world coordinate system, the pod camera intrinsic parameter matrix and the identity matrix;
and the camera focal length calculation unit, which is used for solving the theoretical focal length value of the pod camera according to the intrinsic parameter matrix of the pod camera and the preset pixel width of the pod camera.
9. An unmanned aerial vehicle equipped with a pod, characterized by comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the steps of the pod attitude correction method based on the three-dimensional scene model according to any one of claims 1 to 5.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the pod attitude correction method based on the three-dimensional scene model according to any one of claims 1 to 5.
CN202110871786.4A 2021-07-30 2021-07-30 Pod attitude correction method and device based on three-dimensional scene model and unmanned aerial vehicle Active CN113415433B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110871786.4A CN113415433B (en) 2021-07-30 2021-07-30 Pod attitude correction method and device based on three-dimensional scene model and unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110871786.4A CN113415433B (en) 2021-07-30 2021-07-30 Pod attitude correction method and device based on three-dimensional scene model and unmanned aerial vehicle

Publications (2)

Publication Number Publication Date
CN113415433A CN113415433A (en) 2021-09-21
CN113415433B true CN113415433B (en) 2022-11-29

Family

ID=77718644

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110871786.4A Active CN113415433B (en) 2021-07-30 2021-07-30 Pod attitude correction method and device based on three-dimensional scene model and unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN113415433B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115164823B (en) * 2022-05-16 2024-04-02 上海芯翌智能科技有限公司 Method and device for acquiring gyroscope information of camera
CN116758157B (en) * 2023-06-14 2024-01-30 深圳市华赛睿飞智能科技有限公司 Unmanned aerial vehicle indoor three-dimensional space mapping method, system and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9047675B2 (en) * 2012-08-13 2015-06-02 The Boeing Company Strike detection using video images
IL233684B (en) * 2014-07-17 2018-01-31 Shamir Hanan Stabilization and display of remote images

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012118666A (en) * 2010-11-30 2012-06-21 Iwane Laboratories Ltd Three-dimensional map automatic generation device
JP2014045276A (en) * 2012-08-24 2014-03-13 Olympus Imaging Corp Photographing device
CN103129752A (en) * 2013-02-28 2013-06-05 中国资源卫星应用中心 Dynamic compensation method for attitude angle errors of optical remote sensing satellite based on ground navigation
CN104898653A (en) * 2015-05-18 2015-09-09 国家电网公司 Flight control system
CN106454209A (en) * 2015-08-06 2017-02-22 航天图景(北京)科技有限公司 Unmanned aerial vehicle emergency quick action data link system and unmanned aerial vehicle emergency quick action monitoring method based on spatial-temporal information fusion technology
CN107728637A (en) * 2017-12-02 2018-02-23 广东容祺智能科技有限公司 A kind of UAS of intelligent adjustment camera angle
CN207638247U (en) * 2017-12-28 2018-07-20 中国科学院西安光学精密机械研究所 Intelligent line patrolling photoelectric nacelle
CN108334844A (en) * 2018-02-06 2018-07-27 贵州电网有限责任公司 A kind of automatic tracking method along the line of polling transmission line
CN108733066A (en) * 2018-05-07 2018-11-02 中国人民解放军国防科技大学 Target tracking control method based on pod attitude feedback
CN110543800A (en) * 2018-05-29 2019-12-06 北京京东尚科信息技术有限公司 target identification and tracking method and device for nacelle and nacelle
CN108803668A (en) * 2018-06-22 2018-11-13 航天图景(北京)科技有限公司 A kind of intelligent patrol detection unmanned plane Towed bird system of static object monitoring
CN209192249U (en) * 2018-12-04 2019-08-02 成都纵横大鹏无人机科技有限公司 A kind of gondola suspension arrangement for unmanned plane
CN109618134A (en) * 2018-12-10 2019-04-12 北京智汇云舟科技有限公司 A kind of unmanned plane dynamic video three-dimensional geographic information real time fusion system and method
CN110580054A (en) * 2019-08-21 2019-12-17 东北大学 Control system and method of photoelectric pod based on autonomous visual tracking
CN110706273A (en) * 2019-08-21 2020-01-17 成都携恩科技有限公司 Real-time collapse area measuring method based on unmanned aerial vehicle
CN110648283A (en) * 2019-11-27 2020-01-03 成都纵横大鹏无人机科技有限公司 Image splicing method and device, electronic equipment and computer readable storage medium
CN111586360A (en) * 2020-05-14 2020-08-25 佳都新太科技股份有限公司 Unmanned aerial vehicle projection method, device, equipment and storage medium
CN112649884A (en) * 2021-01-13 2021-04-13 中国自然资源航空物探遥感中心 Pod attitude real-time adjusting method applied to aviation electromagnetic measurement system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Scene 3D reconstruction based on monocular multi-view images; Wu Zhengzheng et al.; Optics & Optoelectronic Technology; 2020-10-10 (No. 05); full text *
Research on orthophoto measurement based on a UAV platform; Shi Tengfei et al.; Yunnan Electric Power Technology; 2016-06-15 (No. 03); full text *
Research on key technologies of orthophoto production from aerial video imagery; Ren Chaofeng; China Doctoral Dissertations Full-text Database, Basic Sciences; 2015-12-31; pp. 1-137 *

Also Published As

Publication number Publication date
CN113415433A (en) 2021-09-21

Similar Documents

Publication Publication Date Title
US10274316B2 (en) Surveying system
CN110310248B Real-time stitching method and system for unmanned aerial vehicle remote sensing images
JP4685313B2 (en) Method for processing passive volumetric image of any aspect
KR100912715B1 (en) Method and apparatus of digital photogrammetry by integrated modeling for different types of sensors
CN108168521A Method for realizing three-dimensional landscape visualization based on an unmanned aerial vehicle
CN110930508B (en) Two-dimensional photoelectric video and three-dimensional scene fusion method
US11689808B2 (en) Image synthesis system
CN113415433B (en) Pod attitude correction method and device based on three-dimensional scene model and unmanned aerial vehicle
US20060215935A1 (en) System and architecture for automatic image registration
CN110555813B (en) Rapid geometric correction method and system for remote sensing image of unmanned aerial vehicle
KR102159134B1 (en) Method and system for generating real-time high resolution orthogonal map for non-survey using unmanned aerial vehicle
CN115439531A (en) Method and equipment for acquiring target space position information of target object
Koeva 3D modelling and interactive web-based visualization of cultural heritage objects
CN112907745B (en) Method and device for generating digital orthophoto map
CN114187344A (en) Map construction method, device and equipment
CN110940318A (en) Aerial remote sensing real-time imaging method, electronic equipment and storage medium
JP2004127322A (en) Stereo image forming method and apparatus
CN115967354A (en) Photovoltaic fault detection method based on unmanned aerial vehicle cruising
CN117173359A (en) Model making method, system and medium based on oblique photography
CN117708378A (en) Image data processing method and device, electronic equipment and storage medium
CN116128978A (en) Laser radar-camera external parameter self-calibration method and device
CN117746313A (en) Regional monitoring method and device, electronic equipment and electronic fence system
CN115775351A (en) Aviation image orthorectification multitask parallel processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant