CN116612184B - Unmanned aerial vehicle camera pose accurate estimation method based on monitoring scene


Info

Publication number
CN116612184B
Authority
CN
China
Prior art keywords
aerial vehicle
unmanned aerial
camera
pose
photo
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310383452.1A
Other languages
Chinese (zh)
Other versions
CN116612184A (en)
Inventor
蔡国林
杨进
徐柱
张奥丽
孙美玲
孙鑫超
唐敏
邓江渝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest Jiaotong University
Original Assignee
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest Jiaotong University
Priority to CN202310383452.1A
Publication of CN116612184A
Application granted
Publication of CN116612184B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Abstract

The invention discloses a method for accurately estimating the pose of an unmanned aerial vehicle camera based on a monitoring scene, comprising the following steps: determining an initial pose of the unmanned aerial vehicle camera; constructing an adjacent multi-view photo album according to the initial pose; performing recognition and calculation on the photo collection to obtain the similarity value between each photo and the original image; and selecting, according to the similarity values, the optimal solution that meets a set threshold as the real pose of the unmanned aerial vehicle camera, thereby determining the camera pose. The method avoids the large systematic errors that inaccurate homonymous points introduce in traditional methods and effectively improves the accuracy of the spatial position and attitude angle of the unmanned aerial vehicle camera; moreover, it does not depend on external equipment and the calculation is essentially automatic.

Description

Unmanned aerial vehicle camera pose accurate estimation method based on monitoring scene
Technical Field
The invention relates to the fields of photogrammetry and video monitoring, and in particular to a method for accurately estimating the pose of an unmanned aerial vehicle camera based on a monitoring scene.
Background
Unmanned aerial vehicles offer communication, remote control, high-resolution visual imaging and data-transmission capabilities and can rapidly deliver large volumes of video and image information, so they are widely used in photogrammetry and video monitoring. Fusing unmanned aerial vehicle video with a three-dimensional geographic scene enables association analysis of the geographic elements around the shooting location, adds a sense of space and realism to the video, and extends the multifunctional applications of the unmanned aerial vehicle. Unmanned aerial vehicle aerial photogrammetry has already been applied to geographic mapping, emergency relief, agricultural valuation, hydraulic and electric engineering construction, land-resource planning and other areas. The fusion of unmanned aerial vehicle video with a three-dimensional real scene displays well and effectively widens the information breadth of the video in geographic space, making it an important link in digital three-dimensional applications. In the current fusion of unmanned aerial vehicle video with three-dimensional scenes, the most common source of the camera's spatial position and attitude parameters is the flight parameters transmitted directly by the unmanned aerial vehicle hardware. Registration of homonymous points with pose estimation is also common in close-range shooting. For light and small unmanned aerial vehicles, however, determining the spatial pose of the camera still suffers from low accuracy, difficult methods and long workflows.
Disclosure of Invention
Aiming at the above defects in the prior art, the method for accurately estimating the pose of an unmanned aerial vehicle camera based on a monitoring scene provided by the invention solves the problems of low accuracy, difficult methods and long workflows in determining the camera pose during the current fusion of unmanned aerial vehicle video with three-dimensional scenes.
To achieve the aim of the invention, the following technical scheme is adopted: a method for accurately estimating the pose of an unmanned aerial vehicle camera based on a monitoring scene, the method comprising the following steps:
s1: determining an initial pose of a camera of the unmanned aerial vehicle;
s2: constructing an adjacent multi-view photo album according to the initial pose of the unmanned aerial vehicle camera;
s3: performing recognition and calculation on the photo collection to obtain the similarity value between each photo and the original image;
s4: selecting, according to the similarity values, the optimal solution that meets a set threshold as the real pose of the unmanned aerial vehicle camera, thereby determining the camera pose.
The beneficial effects of the above scheme are: the technical scheme improves the calculation of the spatial position and attitude angle of the unmanned aerial vehicle camera, and overcomes both the large systematic errors caused by inaccurate homonymous points in traditional methods and the difficult calculation and long workflow of those methods.
Further, the step S1 comprises the following sub-steps:
s1-1: extracting homonymous points between a frame taken from the unmanned aerial vehicle video and a real-scene image of the study area by the SIFT algorithm, and obtaining homonymous-point matching pairs by the FLANN algorithm;
s1-2: removing erroneous (outlier) point pairs from the homonymous-point matching pairs by the RANSAC algorithm, and screening to obtain N pairs of 3D-2D matching points;
s1-3: selecting 4 pairs from the N pairs of 3D-2D matching points and obtaining the initial pose of the camera by the EPnP algorithm, as sketched below.
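For illustration, the following is a minimal Python sketch of sub-steps S1-1 to S1-3 built on OpenCV (an assumption; the patent does not name an implementation). The helper pts3d_lookup, which maps a matched reference keypoint to its 3D scene coordinate, is hypothetical:

```python
import cv2
import numpy as np

def estimate_initial_pose(frame, reference_img, pts3d_lookup, K, dist_coeffs):
    """S1-1..S1-3: SIFT + FLANN matching, RANSAC screening, EPnP pose."""
    sift = cv2.SIFT_create()
    kp_f, des_f = sift.detectAndCompute(frame, None)          # video-frame features
    kp_r, des_r = sift.detectAndCompute(reference_img, None)  # real-scene features

    # S1-1: FLANN matching with Lowe's ratio test to keep reliable pairs
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    good = [m for m, n in flann.knnMatch(des_f, des_r, k=2)
            if m.distance < 0.7 * n.distance]

    # Build 3D-2D pairs; pts3d_lookup returns the scene coordinate behind a
    # reference-image keypoint (hypothetical helper).
    pts_2d = np.float32([kp_f[m.queryIdx].pt for m in good])
    pts_3d = np.float32([pts3d_lookup(kp_r[m.trainIdx]) for m in good])

    # S1-2 + S1-3: RANSAC removes outlier pairs, EPnP solves the initial pose
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts_3d, pts_2d, K, dist_coeffs, flags=cv2.SOLVEPNP_EPNP)
    return (rvec, tvec, inliers) if ok else None
```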
The beneficial effects of the above further scheme are: according to this technical scheme, the initial position and initial attitude of the unmanned aerial vehicle camera are obtained by the traditional feature extraction and matching method.
Further, obtaining the initial pose of the camera with the EPnP algorithm in S1-3 involves the following formulas:

the virtual control points taken in the world coordinate system are

$c_j^w = (x_j^w, y_j^w, z_j^w)^T, \quad j = 1, 2, 3, 4$

where the $c_j^w$ are the different virtual control points in the world coordinate system;

the coordinates of the virtual control points in the camera coordinate system are

$c_j^c = (x_j^c, y_j^c, z_j^c)^T, \quad j = 1, 2, 3, 4$

where the $c_j^c$ are the corresponding coordinate points in the camera coordinate system, and the superscript T denotes the transpose of the matrix;

each world point $p_i^w$ is simplified to a weighted combination of the virtual control points

$p_i^w = \sum_{j=1}^{4} a_{ij} c_j^w, \qquad \sum_{j=1}^{4} a_{ij} = 1$

where i and j are the serial numbers of the different points under the two coordinate systems, the $p_i^w$ are the coordinates of the points in the world coordinate system, and the $a_{ij}$ are the weights of the homogeneous coordinate points;

from the perspective projection model one obtains

$w_i \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = \begin{bmatrix} f_u & 0 & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \sum_{j=1}^{4} a_{ij} c_j^c$

where $w_i$ is the projection depth of the spatial point, $u_i$ and $v_i$ are the image-point coordinates of the world point after passing through the camera model, $f_u$ and $f_v$ describe in pixels the focal length along the x and y axes, and $u_0$ and $v_0$ describe in pixels the actual position of the principal point;

after conversion (eliminating $w_i$), each point yields the two equations

$\sum_{j=1}^{4} a_{ij} f_u x_j^c + a_{ij} (u_0 - u_i) z_j^c = 0$

$\sum_{j=1}^{4} a_{ij} f_v y_j^c + a_{ij} (v_0 - v_i) z_j^c = 0$

assuming the target object contains n points, the 2n equations form the linear system

$Hx = 0, \qquad x = \sum_{i=1}^{L} \beta_i v_i$

where H is a $2n \times 12$ matrix, x is the 12-dimensional vector comprising the 12 solving parameters (the camera-frame coordinates of the four control points), L is the dimension of the null space of H, the $v_i$ are its basis vectors, and the coefficients $\beta_i$ yield the control point coordinates that are finally obtained;

the linear system is solved to obtain the virtual control point coordinates, the absolute orientation problem is solved by matrix decomposition to obtain the camera pose parameters, and the set with the minimum reprojection error is selected as the initial pose of the camera.
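As a worked illustration of the linear system above, the following numpy sketch (variable names are ours, not the patent's) assembles H from the weights $a_{ij}$, the image points and the intrinsics, and extracts the null-space basis $v_i$ whose combination $x = \sum_i \beta_i v_i$ holds the camera-frame control-point coordinates:

```python
import numpy as np

def build_H(a, uv, fu, fv, u0, v0):
    """a: (n, 4) weights a_ij; uv: (n, 2) image points (u_i, v_i)."""
    n = a.shape[0]
    H = np.zeros((2 * n, 12))
    for i in range(n):
        for j in range(4):  # columns 3j..3j+2 hold (x_j^c, y_j^c, z_j^c)
            H[2 * i,     3 * j:3 * j + 3] = a[i, j] * np.array([fu, 0.0, u0 - uv[i, 0]])
            H[2 * i + 1, 3 * j:3 * j + 3] = a[i, j] * np.array([0.0, fv, v0 - uv[i, 1]])
    return H

def null_space_basis(H, L):
    """Right singular vectors of H for its L smallest singular values."""
    _, _, Vt = np.linalg.svd(H)
    return Vt[-L:].T  # 12 x L basis; x = sum_i beta_i * v_i
```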
The beneficial effects of the above further scheme are: the EPnP algorithm linearly re-expresses the feature points of three-dimensional space in terms of 4 non-coplanar virtual control points, and obtains the camera pose parameters by solving an absolute orientation problem.
Further, when constructing the adjacent multi-view photo album in S2, the spatial coordinates and attitude angle of the unmanned aerial vehicle are used as the change parameters, specifically with the following formulas:

the initial spatial position of the camera is $(X, Y, Z)$ and the attitude angle is $(\alpha, \beta, \gamma)$; the change limits of the spatial position and attitude angle are:

$X = X \pm X_1$

$Y = Y \pm Y_1$

$Z = Z \pm Z_1$

$\alpha = \alpha \pm \alpha_1$

$\beta = \beta \pm \beta_1$

$\gamma = \gamma \pm \gamma_1$

where $X_1$, $Y_1$, $Z_1$, $\alpha_1$, $\beta_1$ and $\gamma_1$ are the amounts of change of $X$, $Y$, $Z$, $\alpha$, $\beta$ and $\gamma$, respectively;

at the same time, the change steps in the $X$, $Y$ and $Z$ directions are set to $\Delta X$, $\Delta Y$ and $\Delta Z$, and the change steps in the $\alpha$, $\beta$ and $\gamma$ directions are set to $\Delta\alpha$, $\Delta\beta$ and $\Delta\gamma$; then $2 \times M^3 \times N^3$ photos are generated, where $M$ is the step-count parameter in the $X$, $Y$ and $Z$ directions (e.g. $M = X_1 / \Delta X$) and $N$ is the step-count parameter in the $\alpha$, $\beta$ and $\gamma$ directions (e.g. $N = \alpha_1 / \Delta\alpha$).
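A minimal sketch of the album construction under this parameterization (the inclusive grid below is a simplification; the exact number of views follows the $2 \times M^3 \times N^3$ formula above, and render_view, a renderer that images the 3D scene from a candidate pose, is hypothetical):

```python
import itertools
import numpy as np

def candidate_poses(pos, ang, limits, M, N):
    """pos = (X, Y, Z); ang = (alpha, beta, gamma);
    limits = (X1, Y1, Z1, alpha1, beta1, gamma1); M, N steps per axis."""
    axes = []
    for centre, lim, steps in zip(pos + ang, limits, (M, M, M, N, N, N)):
        # sample [centre - lim, centre + lim] with change step lim / steps
        axes.append(np.arange(centre - lim, centre + lim + 1e-9, lim / steps))
    return list(itertools.product(*axes))  # each item: (X, Y, Z, alpha, beta, gamma)

# album = [(p, render_view(scene, p)) for p in candidate_poses(...)]  # hypothetical renderer
```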
The beneficial effects of the above further scheme are: step sizes are set separately around the initial spatial position and attitude angle of the camera to obtain the multi-view photo album.
Further, the step S3 comprises the following sub-steps:
s3-1: identifying the photo album by using an inter-image matching algorithm, and taking the similarity between images as a calculation index;
s3-2: based on the calculation index, traversing and comparing the real video frame of the unmanned aerial vehicle with each image in the adjacent multi-view photo collection, and obtaining the similarity value of each photo and the original image.
The beneficial effects of the above further scheme are: the real unmanned aerial vehicle video frame is compared by traversal with the constructed multi-view photo album to obtain the similarity values.
Further, in S3-2 the similarity value between each photo and the original image is obtained by a mean-hash algorithm, specifically comprising the following steps:
s3-2-1: scaling the size of each photo and the original image to 8×8, and performing gray scale processing;
s3-2-2: comparing each pixel value in the image after gray processing with the average value, marking the value higher than the average value as 1 and the value lower than the average value as 0, and obtaining hash codes of each photo and the original image;
s3-2-3: based on the hash codes, the similarity value W is obtained through the Hamming distance, with the formula

$W = (D - d) / D$

where D is the total number of code bits and d is the Hamming distance value.
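A compact sketch of S3-2-1 to S3-2-3, assuming OpenCV and numpy:

```python
import cv2
import numpy as np

def ahash_bits(img):
    small = cv2.resize(img, (8, 8), interpolation=cv2.INTER_AREA)  # S3-2-1: scale to 8x8
    grey = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)                 # grey-scale processing
    return (grey > grey.mean()).flatten()                          # S3-2-2: 1 above the mean, 0 below

def similarity(img_a, img_b):
    a, b = ahash_bits(img_a), ahash_bits(img_b)
    D = a.size                         # total number of code bits (64)
    d = int(np.count_nonzero(a != b))  # Hamming distance
    return (D - d) / D                 # S3-2-3: W = (D - d) / D
```

The best-matching view of the album is then simply the one maximising this value, e.g. max(album, key=lambda pi: similarity(frame, pi[1])).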
The beneficial effects of the above further scheme are: the image is scaled to 8×8 because the original image has high resolution and many pixels, each carrying an RGB value; the amount of information is huge and similarity matching over it would be complex, so scaling hides detail and reduces the amount of information.
Further, when the optimal solution meeting the set threshold is selected in S4, if no photo meets the set threshold, the view-angle range is enlarged and the change step between photos is reduced until a photo meets the set threshold.
The beneficial effects of the above further scheme are: according to the similarity values obtained in the preceding steps, a photo that meets the threshold is taken as the real pose of the unmanned aerial vehicle camera; if no photo meets the threshold, the screening is completed by enlarging the view-angle range and reducing the change step between photos, as sketched below.
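A sketch of this coarse-to-fine loop, reusing candidate_poses and similarity from the sketches above; the doubling factors used to widen the range and refine the step are illustrative assumptions, not values fixed by the method:

```python
def search_pose(frame, scene, pose0, limits, M, N, threshold, render_view):
    """pose0 = (X, Y, Z, alpha, beta, gamma); returns the accepted pose."""
    while True:
        album = [(p, render_view(scene, p))
                 for p in candidate_poses(pose0[:3], pose0[3:], limits, M, N)]
        best_pose, best_img = max(album, key=lambda pi: similarity(frame, pi[1]))
        if similarity(frame, best_img) >= threshold:
            return best_pose                   # real pose of the UAV camera
        limits = tuple(2 * l for l in limits)  # enlarge the view-angle range
        M, N = 2 * M, 2 * N                    # reduce the inter-photo change step
```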
Drawings
Fig. 1 is a flowchart of the method for accurately estimating the pose of an unmanned aerial vehicle camera based on a monitoring scene.
Fig. 2 is a diagram of the constructed adjacent multi-view photo collection.
Fig. 3 is an image gray value calculation chart.
Fig. 4 is a hash code calculation diagram.
Fig. 5 is a comparison chart before and after correction.
Detailed Description
The invention will be further described with reference to the drawings and specific examples.
As shown in fig. 1, a method for accurately estimating the pose of an unmanned aerial vehicle camera based on a monitoring scene comprises the following steps:
s1: determining an initial pose of a camera of the unmanned aerial vehicle;
s2: constructing an adjacent multi-view photo album according to the initial pose of the unmanned aerial vehicle camera;
s3: performing recognition and calculation on the photo collection to obtain the similarity value between each photo and the original image;
s4: selecting, according to the similarity values, the optimal solution that meets a set threshold as the real pose of the unmanned aerial vehicle camera, thereby determining the camera pose.
S1 comprises the following sub-steps:
s1-1: extracting homonymous points between a frame taken from the unmanned aerial vehicle video and a real-scene image of the study area by the SIFT algorithm, and obtaining homonymous-point matching pairs by the FLANN algorithm;
s1-2: removing erroneous (outlier) point pairs from the homonymous-point matching pairs by the RANSAC algorithm, and screening to obtain N pairs of 3D-2D matching points;
s1-3: selecting 4 pairs from the N pairs of 3D-2D matching points and obtaining the initial pose of the camera by the EPnP algorithm.
Obtaining the initial pose of the camera with the EPnP algorithm in S1-3 involves the following formulas:

the virtual control points taken in the world coordinate system are

$c_j^w = (x_j^w, y_j^w, z_j^w)^T, \quad j = 1, 2, 3, 4$

where the $c_j^w$ are the different virtual control points in the world coordinate system;

the coordinates of the virtual control points in the camera coordinate system are

$c_j^c = (x_j^c, y_j^c, z_j^c)^T, \quad j = 1, 2, 3, 4$

where the $c_j^c$ are the corresponding coordinate points in the camera coordinate system, and the superscript T denotes the transpose of the matrix;

each world point $p_i^w$ is simplified to a weighted combination of the virtual control points

$p_i^w = \sum_{j=1}^{4} a_{ij} c_j^w, \qquad \sum_{j=1}^{4} a_{ij} = 1$

where i and j are the serial numbers of the different points under the two coordinate systems, the $p_i^w$ are the coordinates of the points in the world coordinate system, and the $a_{ij}$ are the weights of the homogeneous coordinate points;

from the perspective projection model one obtains

$w_i \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = \begin{bmatrix} f_u & 0 & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \sum_{j=1}^{4} a_{ij} c_j^c$

where $w_i$ is the projection depth of the spatial point, $u_i$ and $v_i$ are the image-point coordinates of the world point after passing through the camera model, $f_u$ and $f_v$ describe in pixels the focal length along the x and y axes, and $u_0$ and $v_0$ describe in pixels the actual position of the principal point;

after conversion (eliminating $w_i$), each point yields the two equations

$\sum_{j=1}^{4} a_{ij} f_u x_j^c + a_{ij} (u_0 - u_i) z_j^c = 0$

$\sum_{j=1}^{4} a_{ij} f_v y_j^c + a_{ij} (v_0 - v_i) z_j^c = 0$

assuming the target object contains n points, the 2n equations form the linear system

$Hx = 0, \qquad x = \sum_{i=1}^{L} \beta_i v_i$

where H is a $2n \times 12$ matrix, x is the 12-dimensional vector comprising the 12 solving parameters, L is the dimension of the null space of H, the $v_i$ are its basis vectors, and the coefficients $\beta_i$ yield the control point coordinates that are finally obtained;

the linear system is solved to obtain the virtual control point coordinates, the absolute orientation problem is solved by matrix decomposition to obtain the camera pose parameters, and the set with the minimum reprojection error is selected as the initial pose of the camera.
As shown in fig. 2, a multi-view collection is obtained by changing the pitch angle, yaw angle, roll angle, longitude and latitude of the initial view. When constructing the adjacent multi-view photo album in S2, the spatial coordinates and attitude angle of the unmanned aerial vehicle are used as the change parameters, specifically with the following formulas:
the initial spatial position of the camera is $(X, Y, Z)$ and the attitude angle is $(\alpha, \beta, \gamma)$; the change limits of the spatial position and attitude angle are:

$X = X \pm X_1$

$Y = Y \pm Y_1$

$Z = Z \pm Z_1$

$\alpha = \alpha \pm \alpha_1$

$\beta = \beta \pm \beta_1$

$\gamma = \gamma \pm \gamma_1$

where $X_1$, $Y_1$, $Z_1$, $\alpha_1$, $\beta_1$ and $\gamma_1$ are the amounts of change of $X$, $Y$, $Z$, $\alpha$, $\beta$ and $\gamma$, respectively;

at the same time, the change steps in the $X$, $Y$ and $Z$ directions are set to $\Delta X$, $\Delta Y$ and $\Delta Z$, and the change steps in the $\alpha$, $\beta$ and $\gamma$ directions are set to $\Delta\alpha$, $\Delta\beta$ and $\Delta\gamma$; then $2 \times M^3 \times N^3$ photos are generated, where $M$ is the step-count parameter in the $X$, $Y$ and $Z$ directions and $N$ is the step-count parameter in the $\alpha$, $\beta$ and $\gamma$ directions.
S3 comprises the following sub-steps:
s3-1: identifying the photo album by using an inter-image matching algorithm, and taking the similarity between images as a calculation index;
s3-2: based on the calculation index, traversing and comparing the real video frame of the unmanned aerial vehicle with each image in the adjacent multi-view photo collection, and obtaining the similarity value of each photo and the original image.
The similarity value between each photo and the original image in S3-2 is obtained by a mean-hash algorithm, specifically comprising the following steps:
s3-2-1: scaling each photo and the original image to 8×8 and performing grey-scale processing;
s3-2-2: comparing each pixel value in the grey-processed image with the mean value, recording values above the mean as 1 and values below it as 0, to obtain the hash code of each photo and of the original image. In one embodiment of the invention the grey-value calculation of the image is shown in fig. 3; the mean in this embodiment is 130.86, values above it are recorded as 1 and values below it as 0, the 8-bit string formed by the values in each row is converted to a hexadecimal value, and the row values are concatenated into a character string, giving the hash code of each image, as shown in fig. 4;
s3-2-3: based on the hash codes, the similarity value W is obtained through the Hamming distance algorithm, with the formula

$W = (D - d) / D$

where D is the total number of code bits and d is the Hamming distance value.
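The hex-string form of the hash used in this embodiment (figs. 3 and 4) can be sketched as follows; the 0/1 grid is the mean-threshold output of the previous step:

```python
import numpy as np

def ahash_hex(bits64):
    """Each 8-bit row becomes two hex digits; the rows are concatenated (fig. 4)."""
    rows = np.asarray(bits64, dtype=int).reshape(8, 8)
    return "".join(format(int("".join(str(b) for b in row), 2), "02x") for row in rows)

def hamming_hex(h1, h2):
    """Bitwise Hamming distance between two 16-character hex hash strings."""
    return bin(int(h1, 16) ^ int(h2, 16)).count("1")
```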
When the optimal solution meeting the set threshold is selected in S4, if no photo meets the set threshold, the view-angle range is enlarged and the change step between photos is reduced until a photo meets the set threshold.
In one embodiment of the invention, the initial pose of the unmanned aerial vehicle camera is solved by the traditional feature extraction and matching method, the image rendered from this pose in the scene is taken as the initial-view picture, and the adjacent multi-view collection is constructed from it. Taking the similarity between images as the evaluation index, the initial view is compared with every view in the collection, and the data corresponding to the image with the highest matching degree is selected as the real pose of the unmanned aerial vehicle camera. The method effectively improves the calculation accuracy of the spatial position and attitude angle of the unmanned aerial vehicle camera, assists real-time scene fusion of unmanned aerial vehicle video, and widens the information breadth of unmanned aerial vehicle monitoring video in geographic space.
This scheme improves the method of calculating the spatial position and attitude angle of the unmanned aerial vehicle camera and overcomes the large systematic errors caused by inaccurate homonymous points in traditional methods; as the before-and-after comparison in fig. 5 shows, the accuracy of the spatial position and attitude angle of the unmanned aerial vehicle camera is effectively improved, and since the scheme does not depend on external equipment, the calculation is essentially automatic.
Those of ordinary skill in the art will recognize that the embodiments described herein are intended to help the reader understand the principles of the invention, and that the scope of the invention is not limited to these specific statements and embodiments. Those of ordinary skill in the art can make various other specific modifications and combinations according to the teachings of the present disclosure without departing from the spirit of the invention, and such modifications and combinations remain within the scope of the invention.

Claims (5)

1. A method for accurately estimating the pose of an unmanned aerial vehicle camera based on a monitoring scene, characterized by comprising the following steps:
s1: determining an initial pose of the unmanned aerial vehicle camera;
s2: constructing an adjacent multi-view photo album according to the initial pose of the unmanned aerial vehicle camera;
s3: performing recognition and calculation on the photo collection to obtain the similarity value between each photo and the original image;
s4: selecting, according to the similarity values, the optimal solution that meets a set threshold as the real pose of the unmanned aerial vehicle camera, thereby determining the camera pose;
the S1 comprises the following sub-steps:
s1-1: extracting homonymous points between a frame taken from the unmanned aerial vehicle video and a real-scene image of the study area by the SIFT algorithm, and obtaining homonymous-point matching pairs by the FLANN algorithm;
s1-2: removing erroneous (outlier) point pairs from the homonymous-point matching pairs by the RANSAC algorithm, and screening to obtain N pairs of 3D-2D matching points;
s1-3: selecting 4 pairs from the N pairs of 3D-2D matching points and obtaining the initial pose of the camera by the EPnP algorithm;
the step of obtaining the initial pose of the camera by the EPnP algorithm in S1-3 comprises the following formulas:

the virtual control points taken in the world coordinate system are

$c_j^w = (x_j^w, y_j^w, z_j^w)^T, \quad j = 1, 2, 3, 4$

where the $c_j^w$ are the different virtual control points in the world coordinate system;

the coordinates of the virtual control points in the camera coordinate system are

$c_j^c = (x_j^c, y_j^c, z_j^c)^T, \quad j = 1, 2, 3, 4$

where the $c_j^c$ are the corresponding coordinate points in the camera coordinate system and the superscript T denotes the transpose of the matrix;

each world point $p_i^w$ is simplified to a weighted combination of the virtual control points

$p_i^w = \sum_{j=1}^{4} a_{ij} c_j^w, \qquad \sum_{j=1}^{4} a_{ij} = 1$

where i and j are the serial numbers of the different points under the two coordinate systems, the $p_i^w$ are the coordinates of the points in the world coordinate system, and the $a_{ij}$ are the weights of the homogeneous coordinate points;

from the perspective projection model one obtains

$w_i \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = \begin{bmatrix} f_u & 0 & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \sum_{j=1}^{4} a_{ij} c_j^c$

where $w_i$ is the projection depth of the spatial point, $u_i$ and $v_i$ are the image-point coordinates of the world point after passing through the camera model, $f_u$ and $f_v$ describe in pixels the focal length along the x and y axes, and $u_0$ and $v_0$ describe in pixels the actual position of the principal point;

after conversion, each point yields the two equations

$\sum_{j=1}^{4} a_{ij} f_u x_j^c + a_{ij} (u_0 - u_i) z_j^c = 0, \qquad \sum_{j=1}^{4} a_{ij} f_v y_j^c + a_{ij} (v_0 - v_i) z_j^c = 0$

assuming that the target contains n points, the 2n equations form the linear system

$Hx = 0, \qquad x = \sum_{i=1}^{L} \beta_i v_i$

where H is a $2n \times 12$ matrix, x is the 12-dimensional vector comprising the 12 solving parameters, L is the dimension of the null space of H, the $v_i$ are its basis vectors, and the coefficients $\beta_i$ yield the control point coordinates that are finally obtained;

and the linear system is solved to obtain the virtual control point coordinates, the absolute orientation problem is solved by matrix decomposition to obtain the camera pose parameters, and the set with the minimum reprojection error is selected as the initial pose of the camera.
2. The method for accurately estimating the pose of an unmanned aerial vehicle camera based on a monitoring scene according to claim 1, wherein, when constructing the adjacent multi-view photo album in S2, the spatial coordinates and attitude angle of the unmanned aerial vehicle are used as the change parameters, specifically comprising the following formulas:

the initial spatial position of the camera is $(X, Y, Z)$ and the attitude angle is $(\alpha, \beta, \gamma)$; the change limits of the spatial position and attitude angle are:

$X = X \pm X_1, \quad Y = Y \pm Y_1, \quad Z = Z \pm Z_1$

$\alpha = \alpha \pm \alpha_1, \quad \beta = \beta \pm \beta_1, \quad \gamma = \gamma \pm \gamma_1$

where $X_1$, $Y_1$, $Z_1$, $\alpha_1$, $\beta_1$ and $\gamma_1$ are the amounts of change of $X$, $Y$, $Z$, $\alpha$, $\beta$ and $\gamma$, respectively;

at the same time, the change steps in the $X$, $Y$ and $Z$ directions are set to $\Delta X$, $\Delta Y$ and $\Delta Z$, and the change steps in the $\alpha$, $\beta$ and $\gamma$ directions are set to $\Delta\alpha$, $\Delta\beta$ and $\Delta\gamma$; then $2 \times M^3 \times N^3$ photos are generated, where $M$ is the step-size-related parameter in the $X$, $Y$ and $Z$ directions and $N$ is the step-size-related parameter in the $\alpha$, $\beta$ and $\gamma$ directions.
3. The method for accurately estimating the pose of an unmanned aerial vehicle camera based on a monitoring scene according to claim 1, wherein the step S3 comprises the following sub-steps:
s3-1: identifying the photo album by using an inter-image matching algorithm, and taking the similarity between images as a calculation index;
s3-2: based on the calculation index, traversing and comparing the real video frame of the unmanned aerial vehicle with each image in the adjacent multi-view photo collection, and obtaining the similarity value of each photo and the original image.
4. The method for accurately estimating the pose of an unmanned aerial vehicle camera based on a monitoring scene according to claim 3, wherein the similarity value between each photo and the original image in S3-2 is obtained by a mean-hash algorithm, specifically comprising the following steps:
s3-2-1: scaling each photo and the original image to 8×8 and performing grey-scale processing;
s3-2-2: comparing each pixel value in the grey-processed image with the mean value, recording values above the mean as 1 and values below it as 0, to obtain the hash code of each photo and of the original image;
s3-2-3: based on the hash codes, obtaining the similarity value W through the Hamming distance algorithm, with the formula

$W = (D - d) / D$

where D is the total number of code bits and d is the Hamming distance value.
5. The method for accurately estimating the pose of an unmanned aerial vehicle camera based on a monitoring scene according to claim 1, wherein, when the optimal solution meeting the set threshold is selected in S4, if no photo meets the set threshold, the view-angle range is enlarged and the change step between photos is reduced until a photo meets the set threshold.
CN202310383452.1A 2023-04-11 2023-04-11 Unmanned aerial vehicle camera pose accurate estimation method based on monitoring scene Active CN116612184B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310383452.1A CN116612184B (en) 2023-04-11 2023-04-11 Unmanned aerial vehicle camera pose accurate estimation method based on monitoring scene

Publications (2)

Publication Number Publication Date
CN116612184A CN116612184A (en) 2023-08-18
CN116612184B true CN116612184B (en) 2023-12-05

Family

ID=87675371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310383452.1A Active CN116612184B (en) 2023-04-11 2023-04-11 Unmanned aerial vehicle camera pose accurate estimation method based on monitoring scene

Country Status (1)

Country Link
CN (1) CN116612184B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111586360A (en) * 2020-05-14 2020-08-25 佳都新太科技股份有限公司 Unmanned aerial vehicle projection method, device, equipment and storage medium
CN112233177A (en) * 2020-10-10 2021-01-15 中国安全生产科学研究院 Unmanned aerial vehicle pose estimation method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107564061B (en) * 2017-08-11 2020-11-20 浙江大学 Binocular vision mileage calculation method based on image gradient joint optimization

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111586360A (en) * 2020-05-14 2020-08-25 佳都新太科技股份有限公司 Unmanned aerial vehicle projection method, device, equipment and storage medium
CN112233177A (en) * 2020-10-10 2021-01-15 中国安全生产科学研究院 Unmanned aerial vehicle pose estimation method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Design and implementation of a database for a system that automatically recovers three-dimensional scenes from UAV images; 刘锟铭 et al.; Engineering of Surveying and Mapping; Vol. 26, No. 06; pp. 60-65 *
A point-feature visual method for UAV pose measurement; 吴雷 et al.; Flight Control & Detection; Vol. 2, No. 01; pp. 37-42 *

Also Published As

Publication number Publication date
CN116612184A (en) 2023-08-18

Similar Documents

Publication Publication Date Title
CN108335353B (en) Three-dimensional reconstruction method, device and system of dynamic scene, server and medium
CN110568447B (en) Visual positioning method, device and computer readable medium
Teller et al. Calibrated, registered images of an extended urban area
CN113592989B (en) Three-dimensional scene reconstruction system, method, equipment and storage medium
US11461911B2 (en) Depth information calculation method and device based on light-field-binocular system
EP3274964B1 (en) Automatic connection of images using visual features
CN109063549B (en) High-resolution aerial video moving target detection method based on deep neural network
CN111582022B (en) Fusion method and system of mobile video and geographic scene and electronic equipment
CN114332385A (en) Monocular camera target detection and spatial positioning method based on three-dimensional virtual geographic scene
CN107843251A (en) The position and orientation estimation method of mobile robot
CN110634138A (en) Bridge deformation monitoring method, device and equipment based on visual perception
Cho et al. Diml/cvl rgb-d dataset: 2m rgb-d images of natural indoor and outdoor scenes
CN111144349A (en) Indoor visual relocation method and system
CN112132900B (en) Visual repositioning method and system
WO2021035627A1 (en) Depth map acquisition method and device, and computer storage medium
CN115359195A (en) Orthoimage generation method and device, storage medium and electronic equipment
WO2022247126A1 (en) Visual localization method and apparatus, and device, medium and program
CN116612184B (en) Unmanned aerial vehicle camera pose accurate estimation method based on monitoring scene
CN114140581A (en) Automatic modeling method and device, computer equipment and storage medium
CN112418344A (en) Training method, target detection method, medium and electronic device
CN117726687B (en) Visual repositioning method integrating live-action three-dimension and video
CN114937123B (en) Building modeling method and device based on multi-source image fusion
CN113361544B (en) Image acquisition equipment, and external parameter correction method, device and storage medium thereof
CN111010558B (en) Stumpage depth map generation method based on short video image
CN117011354A (en) Method for calculating object depth of two-dimensional image by using deep neural network cluster

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant