CN115471555A - Unmanned aerial vehicle infrared inspection pose determination method based on image feature point matching

Unmanned aerial vehicle infrared inspection pose determination method based on image feature point matching

Info

Publication number
CN115471555A
CN115471555A (application CN202211155964.4A)
Authority
CN
China
Prior art keywords
unmanned aerial vehicle, pose, image, infrared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211155964.4A
Other languages
Chinese (zh)
Inventor
黄郑
王红星
姜海波
宋煜
陈玉权
张欣
朱洁
杜彪
霍丹江
顾徐
陈露
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Jiangsu Electric Power Co Ltd
Jiangsu Fangtian Power Technology Co Ltd
Original Assignee
State Grid Jiangsu Electric Power Co Ltd
Jiangsu Fangtian Power Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Jiangsu Electric Power Co Ltd, Jiangsu Fangtian Power Technology Co Ltd filed Critical State Grid Jiangsu Electric Power Co Ltd
Priority to CN202211155964.4A
Publication of CN115471555A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Abstract

An unmanned aerial vehicle infrared inspection pose determination method based on image feature point matching comprises the steps that an unmanned aerial vehicle collects an infrared image, preprocesses it, matches it with a power transmission line feature database, and selects feature points of the power transmission line in the infrared image. The unmanned aerial vehicle then keeps its height and moves laterally relative to the power transmission line by a distance, and the feature-point image is acquired again; the physical coordinates of the feature points are obtained from the pixel coordinates of the feature points in the two acquired images and the unmanned aerial vehicle coordinates calculated by inertial/satellite navigation. The feature-point physical coordinates are matched against a standard database and corrected; a P3P solution using the feature-point physical coordinates and the infrared pixel coordinates then yields the pose of the inspection unmanned aerial vehicle relative to the power transmission line, and finally weighted filtering fusion with the inertial navigation and satellite navigation poses yields a high-precision unmanned aerial vehicle navigation pose. The invention solves the problems that the satellite navigation system of a power transmission line unmanned aerial vehicle is easily interfered with and the inertial navigation positioning system drifts over time in complex high-voltage, strong-interference environments, and improves the navigation and positioning precision and stability of the unmanned aerial vehicle.

Description

Unmanned aerial vehicle infrared inspection pose determination method based on image feature point matching
Technical Field
The invention relates to the technical field of unmanned aerial vehicle visual navigation and positioning, in particular to an unmanned aerial vehicle infrared inspection pose determining method based on image feature point matching.
Background
Unmanned aerial vehicle (UAV) technology has developed rapidly in recent years. With the improvement of UAV automation and intelligence, UAVs have begun to be used extensively in place of manual work in power grid transmission line inspection, greatly saving manpower and material resources. However, a large amount of electromagnetic interference often exists near high-voltage transmission lines, and it is particularly pronounced in ultrahigh-voltage and extra-high-voltage environments when line defects are inspected at close range, so traditional satellite navigation and electronic compasses easily fail. Meanwhile, the inertial navigation system drifts over long periods and cannot provide stable pose data. A new navigation mode for power transmission line inspection UAVs is therefore needed to solve these problems.
Image navigation is a new navigation mode that obtains navigation and positioning information by extracting and matching image feature points. However, visible light is sensitive to brightness and weather conditions and cannot work around the clock. Infrared imaging collects the infrared radiation of the target, so it is independent of illumination, little affected by weather, has a large detection range, and can work in all weather conditions; combined with the inertial navigation system and the satellite navigation system, it can operate around the clock in complex electromagnetic-interference environments and serve as an auxiliary navigation means for inspection UAVs.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides an unmanned aerial vehicle infrared inspection pose determination method based on image feature point matching, to address the problems that the satellite navigation system of the power transmission line inspection unmanned aerial vehicle is easily interfered with in high-voltage, strong-magnetic environments and that the inertial navigation system drifts over time, resulting in low navigation and positioning accuracy of the inspection unmanned aerial vehicle.
In order to realize the purpose, the invention adopts the following technical scheme:
an unmanned aerial vehicle infrared inspection pose determining method based on image feature point matching comprises the following steps:
s1: collecting an infrared image of the power transmission line by using an infrared camera mounted by an unmanned aerial vehicle, and performing distortion removal and normalization pretreatment on the collected infrared image;
s2: matching the infrared image with a power transmission line characteristic database, namely removing environmental influence, and selecting and acquiring characteristic points of the power transmission line in the infrared image;
s3: taking the current flight position of the unmanned aerial vehicle as an origin point p1, resetting the inertial navigation system data in the unmanned aerial vehicle, controlling the unmanned aerial vehicle to keep its original height and move laterally toward the power transmission line by a distance d, calculating the new coordinate point p2 of the unmanned aerial vehicle flight position after the flight distance d by the inertial navigation system, and simultaneously determining the flight (rotation) matrix R of the unmanned aerial vehicle during the movement;
s4: calculating physical coordinates of the characteristic points through binocular solution according to the p1 and p2 point coordinates of the unmanned aerial vehicle and the image coordinates of the characteristic points under the p1 and p2 points and the flight matrix R;
s5: matching the resolved feature point physical coordinates with a power transmission line database, and if the matching is successful, correcting by using accurate physical coordinates in the power transmission line database; if not, continuing to use the physical coordinates of the feature points calculated in the step S4;
s6: calculating by using the physical coordinates of the feature points processed in the step S5 and using a P3P algorithm to obtain pose information of the unmanned aerial vehicle relative to the power transmission line;
s7: and performing data fusion on the pose information obtained by the infrared image matching in the step S6 and pose information in the inertial navigation system and the satellite navigation system to obtain final pose information.
In order to optimize the technical scheme, the specific measures adopted further comprise:
further, the power transmission line characteristic database in the step S2 is trained in advance, and is obtained by adopting a YOLOv5S lightweight network model and training through PyTorch; wherein, the training classification has 3 types, including: the training data are derived from public data information and photos collected in daily inspection, and the data information and the photos are covered with various inspection scenes of sunny days, cloudy days, morning, noon and evening.
Further, the requirement on and the selection method for the moving distance d in step S3 are as follows:

The principle for moving the distance d is that the feature points selected in step S2 must remain in the infrared image, which must differ from the infrared image before the movement.

The distance d is selected by the following method: the pixel moving distance of each selected feature point in the image coordinate system is more than 10% of the diagonal pixel distance of the infrared image resolution, namely

$$\Delta p_i > 0.1\sqrt{p_x^2 + p_y^2} \tag{1}$$

where i is any i-th feature point, $\Delta p_i$ is the pixel distance that the i-th feature point moves in the image coordinate system after moving the distance d in the geographic coordinate system, and $p_x$ and $p_y$ are respectively the length and width of the acquired infrared image resolution.
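A direct check of this criterion might look as follows (a sketch; the function name and the example 640x512 resolution are assumptions):

```python
import numpy as np

def displacement_ok(pts_before, pts_after, p_x, p_y, ratio=0.1):
    """True if every tracked feature point moved more than `ratio` of the
    image diagonal (in pixels) between the images taken at p1 and p2.
    pts_before, pts_after: (N, 2) arrays of pixel coordinates."""
    diag = np.hypot(p_x, p_y)                 # diagonal pixel distance
    shifts = np.linalg.norm(np.asarray(pts_after, float)
                            - np.asarray(pts_before, float), axis=1)
    return bool(np.all(shifts > ratio * diag))

# Example with an assumed 640x512 infrared resolution:
# displacement_ok(p1_pts, p2_pts, p_x=640, p_y=512)
```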
Further, in step S3 the new coordinate point p2 of the unmanned aerial vehicle flight position after flying the distance d is calculated by the inertial navigation system, and the flight matrix R during the movement is determined, by the following formula:

$$T = \Delta p = \int_{t_1}^{t_1+\Delta t} v\,\mathrm{d}t,\qquad Euler = \int_{t_1}^{t_1+\Delta t} \omega\,\mathrm{d}t,\qquad R = R(\theta,\psi,\gamma)$$

where T denotes the movement (translation) vector; $\Delta p$ denotes the position change of the unmanned aerial vehicle, obtained by dead reckoning of the unmanned aerial vehicle's internal inertial navigation system; $t_1$ is the moment before the unmanned aerial vehicle moves, and $\Delta t$ is the time taken by the movement; $v$ is the unmanned aerial vehicle velocity output by the inertial navigation system; $\omega$ is the inertial-navigation gyroscope signal; and Euler is the attitude angle, comprising the pitch angle $\theta$, the yaw angle $\psi$ and the roll angle $\gamma$. The new coordinate point p2 after flying the distance d and the flight matrix R during the movement are determined by this formula.
Further, the formulas for calculating the physical coordinates of the feature points in step S4 are as follows:

$$s\,P = K\,P_w \tag{2}$$

$$X_1 = K^{-1}P_1,\qquad X_2 = K^{-1}P_2,\qquad X_2 = R\,X_1 + T \tag{3}$$

where P is the (homogeneous) image coordinate of any feature point, s its depth, and $P_w$ its spatial three-dimensional position; K is the intrinsic parameter matrix of the infrared camera; $X_1$, $X_2$ are the normalized coordinates of the corresponding feature point in the image coordinate systems of the two images $P_1$, $P_2$. Expanding formula (2) yields only 2 equations, while the unknowns are the spatial three-dimensional position, i.e., 3 unknowns, so it cannot be solved directly. Formula (3) therefore lists 3 matrix equations, using formula (2) together with the relationship between the translation matrix T and the rotation matrix R obtained in the previous step and the correspondence between the 2 images; the physical coordinates of the feature points can then be obtained from formula (3).

Let $X_1$, $X_2$ be the normalized image coordinates of the same feature point before and after the unmanned aerial vehicle moves; $X_1$ and $X_2$ are obtained through the simultaneous formulas (2) and (3), and the physical coordinates of the corresponding feature point are then solved by a numerical method. Similarly, the physical coordinates of a plurality of feature points are obtained from formulas (2) and (3).
Further, the specific contents of matching the physical coordinates of the feature points with the power transmission line database in step S5 are as follows:

A plurality of feature-point physical coordinates are selected and combined into a measuring polygon, the corresponding position points of the power transmission line database are selected and combined into a measuring polygon, the centroids of the two measuring polygons are placed so as to coincide, and the matching degree M of the two polygons is calculated and evaluated; the expression of M is:

$$M = \sum_{i=1}^{n}\left[(x_i - \hat x_i)^2 + (y_i - \hat y_i)^2\right] \tag{4}$$

where n is the number of feature points, $(x_i, y_i)$ are the physical coordinates of the feature points, and $(\hat x_i, \hat y_i)$ are the physical coordinates in the power transmission line database.

One measuring polygon is then rotated around the centroid to re-match the two measuring polygons; when the minimum value of M over a 360° rotation is smaller than a set threshold, the physical coordinates of the feature points are considered to be successfully matched with the power transmission line database, namely:

$$\min(M) < M_{thr} \tag{5}$$

where $M_{thr}$ is the set matching threshold.
Further, the unmanned aerial vehicle pose solution in step S6 is given by the following formulas:

First, 3 feature points A, B and C whose physical coordinates are known from step S5 are selected, and the coordinates of these 3 feature points in the camera coordinate system are calculated. Let O be the camera focal point; from the spatial position relationship, with x = OA/OC and y = OB/OC, there is:

$$\begin{cases} x^2 + 1 - x\,q = \dfrac{AC^2}{AB^2}\left(x^2 + y^2 - x y\,p\right)\\[4pt] y^2 + 1 - y\,r = \dfrac{CB^2}{AB^2}\left(x^2 + y^2 - x y\,p\right) \end{cases} \tag{6}$$

where p = 2cos∠BOA, q = 2cos∠AOC, r = 2cos∠COB; the distances AB, BC and AC are known from the physical coordinates, while OA, OB and OC are unknown and are the quantities to be solved via formula (6). The bivariate quadratic system of formula (6) is solved by the Wu elimination method, obtaining 4 groups of solutions for x and y.

$$OC = \sqrt{\frac{AB^2}{x^2 + y^2 - x y\,p}},\qquad OA = x\cdot OC,\qquad OB = y\cdot OC \tag{7}$$

Substituting the 4 groups of x and y into formula (7) gives 4 groups of distances OA, OB and OC from the feature points A, B, C to the camera focal point O. The image coordinates corresponding to A, B and C are given by the infrared camera, so the direction vectors $\vec a$, $\vec b$, $\vec c$ of the feature points A, B and C are known, and the 4 groups of coordinates $A_{image}$, $B_{image}$, $C_{image}$ of A, B, C in the camera coordinate system are each obtained by formula (8):

$$A_{image} = OA\cdot\vec a,\qquad B_{image} = OB\cdot\vec b,\qquad C_{image} = OC\cdot\vec c \tag{8}$$

A 4th verification feature point D from step S5 is then brought into the camera coordinate system, the Euclidean distances between D and the projections of the 4 solutions in the camera coordinate system are calculated, and the solution among the 4 with the smallest projection error in the physical coordinate system is selected as the correct one. This gives the correct corresponding coordinates of A, B and C in the camera coordinate system and the physical coordinate system, from which a simple coordinate transformation yields the pose of the infrared camera in the physical coordinate system, i.e., the pose r of the unmanned aerial vehicle in the physical coordinate system.
Further, the data fusion method in step S7 adopts covariance weighting followed by Kalman filtering to obtain the final pose information.

The covariance weighting algorithm is:

$$c_i = \frac{1/e_i^2}{\sum_{j=1}^{3} 1/e_j^2},\qquad r_{out} = \sum_{i=1}^{3} c_i\, r_i$$

where i = 1,2,3 denote the infrared matching system (1), the inertial navigation system (2) and the satellite navigation system (3), respectively; $e_i$ represents the covariance of the pose data of the three systems; $c_i$ are the fusion weight coefficients of the three systems; $r_i$, i = 1,2,3, are the pose data of the infrared matching system, the inertial navigation system and the satellite navigation system; and $r_{out}$ is the final covariance-weighted fused navigation pose data.

The specific contents of Kalman filtering the fused navigation pose data $r_{out}$ to remove noise and obtain a more accurate data result are as follows:

First, the weighted-fused pose-data estimation covariance q is calculated:

$$q = E\left[(r_{out} - \hat r)(r_{out} - \hat r)^{T}\right]$$

where E is the expectation and $\hat r$ is the true pose information.

Then the weighted-fused navigation pose data $r_{out}$ are tracked and filtered using the Kalman filtering algorithm; the state equation and measurement equation of the Kalman filtering system can be expressed in general form as:

$$\begin{cases} r_{out}(k+1) = f\left[r_{out}(k)\right] + \Gamma(k)\,w(k)\\ z(k) = h\left[r_{out}(k)\right] + v(k) \end{cases}$$

where $r_{out}(k)$ is the pose state matrix, $f[r_{out}(k)]$ is the system state transfer function, $w(k)$ is the Gaussian system noise, $\Gamma(k)$ is the distribution matrix of $w(k)$, $z(k)$ is the pose measurement matrix, $h[r_{out}(k)]$ is the measurement transfer function, and $v(k)$ is the Gaussian measurement noise, whose covariance is the fused covariance q.
The invention has the beneficial effects that:
1. The invention integrates the advantages of the image navigation system, the satellite navigation system and the inertial navigation system, and performs feature-matching navigation with infrared images, so that the inspection unmanned aerial vehicle is less affected by the working environment, has a wider working range, and can work day and night. Previous image-matching research has mainly focused on visible light, which places high demands on the working environment and limits the actual navigation effect.
2. The method applies a binocular solution to the identified targets, using images of the single infrared camera at different times and positions together with the short-term high-precision navigation information of inertial navigation, to obtain the targets' physical coordinate information, and then calculates the position and attitude of the unmanned aerial vehicle from the target physical coordinates, solving the problem that a single infrared camera cannot obtain depth information and therefore cannot provide position and attitude information.
3. By introducing the target model database, only the feature points of the 3 types of power transmission line targets are identified, reducing the influence of environmental obstacles; the calculated physical coordinates of the feature points are compared with the model database, and if a point exists in the database its physical coordinates can be corrected, improving the pose-determination precision of infrared feature matching.
Drawings
Fig. 1 is a schematic flow chart of the overall scheme provided in the embodiment of the present invention.
Fig. 2 is a schematic diagram of a method and steps for determining the pose of an unmanned aerial vehicle according to an embodiment of the present invention.
Fig. 3 is a schematic diagram illustrating a method and steps for generating an infrared feature database according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of the binocular vision solution feature point physical coordinates provided in the embodiment of the present invention.
Fig. 5 is a schematic diagram of the matching correction of the physical coordinates of the feature points provided in the embodiment of the present invention.
Fig. 6 is a schematic diagram of the P3P solution of the unmanned aerial vehicle pose provided in the embodiment of the present invention.
Detailed Description
The present invention will now be described in further detail with reference to the accompanying drawings.
As shown in fig. 1 and fig. 2, the method for determining the infrared inspection pose of the unmanned aerial vehicle based on image feature point matching provided by this embodiment includes:
s1, collecting an infrared image of the power transmission line by using an infrared camera mounted by an unmanned aerial vehicle, and preprocessing the image. If the infrared camera is mounted on the unmanned aerial vehicle through the cradle head, the attitude parameters and the mounting matrix of the cradle head are known; if the infrared camera is directly installed on the unmanned aerial vehicle, the installation matrix of the camera relative to the unmanned aerial vehicle is known, and subsequent attitude calculation is facilitated. And the obtained infrared image is sent to the next step after distortion removal and normalization according to the camera internal parameter matrix. Wherein, the distortion removal uses Zhang Zhengyou calibration method, and the normalization processing method is as follows:
Figure BDA0003858619210000061
wherein input is the input undistorted image pixel and output is the normalized image pixel.
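A sketch of this preprocessing step is given below. Since the normalization formula image is not legible in the source, min-max scaling to [0, 1] is assumed here, and the function and variable names are illustrative.

```python
import cv2
import numpy as np

def preprocess_infrared(img, K, dist_coeffs):
    """Undistort with the intrinsic matrix K and distortion coefficients
    from Zhang Zhengyou calibration, then normalize pixel values.
    Min-max scaling is an assumption; the patent's formula (1) image
    is not recoverable from the source."""
    undistorted = cv2.undistort(img, K, dist_coeffs)
    out = undistorted.astype(np.float32)
    return (out - out.min()) / (out.max() - out.min() + 1e-9)
```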
Step S2: the infrared image is matched against the trained power transmission line feature database, i.e., environmental influences are removed, feature boxes of the power transmission line are selected in the infrared image, and feature points are then calculated from the power transmission line feature boxes. The SIFT algorithm is adopted for calculating the feature points.
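The feature-point extraction can be sketched as follows, restricting SIFT to the detector's feature boxes so the environment is masked out (the box format is an assumption):

```python
import cv2
import numpy as np

def powerline_feature_points(norm_img, boxes):
    """Compute SIFT keypoints only inside the power-line feature boxes
    (wire / insulator / tower detections), masking the environment.
    `boxes` is an assumed list of (x1, y1, x2, y2) pixel rectangles."""
    gray = np.clip(norm_img * 255.0, 0, 255).astype(np.uint8)
    mask = np.zeros_like(gray)
    for x1, y1, x2, y2 in boxes:
        mask[y1:y2, x1:x2] = 255
    sift = cv2.SIFT_create()
    return sift.detectAndCompute(gray, mask)   # (keypoints, descriptors)
```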
as shown in fig. 3, the power transmission line characteristic database is trained by using PyTorch, and the used database is from an open database and photos collected by ordinary inspection, and covers many inspection scenes such as sunny days, cloudy days, morning, noon and evening, and the like. Setting the training class to 3 includes: wire, insulator, shaft tower. The power transmission line is marked by using a picture marking tool, and a YOLOv5s lightweight model is adopted in a training network, so that the power transmission line is conveniently deployed on a mobile computing platform of the unmanned aerial vehicle.
Step S3: as shown in fig. 4, after the unmanned aerial vehicle hovers at the current point and acquires the infrared image, the inertial navigation system data are cleared with this point as origin P1, and the unmanned aerial vehicle keeps its height and moves laterally toward the power transmission line by a distance d to reach P2. The moving distance d is selected so that the feature points selected in step S2 remain in the infrared image while the image differs from the one before the movement; the selection method for d is that the pixel moving distance of each selected feature point in the image coordinate system is larger than 10% of the diagonal pixel distance of the image resolution, namely:

$$\Delta p_i > 0.1\sqrt{p_x^2 + p_y^2} \tag{2}$$

where i is any i-th feature point, $\Delta p_i$ is the pixel distance that the pixel moves in the image coordinate system after moving the distance d in the geographic coordinate system, and $p_x$ and $p_y$ are respectively the length and width of the captured image resolution. The new coordinate P2 after the inspection unmanned aerial vehicle flies the distance d is obtained by integral calculation of the inertial navigation system; if the satellite navigation system is not interfered with, satellite navigation information is also introduced for navigation and positioning. To improve the relative position accuracy of P1 and P2, the unmanned aerial vehicle keeps straight and stable flight during the movement, using velocity-acceleration double-loop closed-loop PID control.
the inertial navigation system outputs the velocity of the unmanned aerial vehicle as
Figure BDA0003858619210000073
Inertial navigation gyroscope signal is
Figure BDA0003858619210000074
The euler attitude angles are a pitch angle theta, a yaw angle psi and a roll angle gamma respectively. Position change of unmanned aerial vehicle in observation period
Figure BDA0003858619210000075
The method can be obtained by dead reckoning of an inertial navigation system. And taking a relative displacement vector obtained by dead reckoning of the unmanned aerial vehicle at the adjacent sampling moment as an unmanned aerial vehicle movement vector. The moving vector T and the rotation matrix R are specifically expressed as:
Figure BDA0003858619210000076
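A simplified dead-reckoning sketch consistent with formula (3) is shown below; rectangular integration and the "zyx" Euler convention are assumptions (a real INS mechanization is more involved):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def dead_reckon(velocities, gyro_rates, dt):
    """Integrate INS velocity samples into the movement vector T and the
    gyro rates into Euler angles (theta, psi, gamma), then form R.
    velocities, gyro_rates: (N, 3) arrays sampled every dt seconds."""
    T = np.asarray(velocities, float).sum(axis=0) * dt          # T = ∫ v dt
    pitch, yaw, roll = np.asarray(gyro_rates, float).sum(axis=0) * dt
    # Attitude from integrated angles; the zyx (yaw-pitch-roll) order is
    # an assumed convention, not stated in the patent.
    R = Rotation.from_euler("zyx", [yaw, pitch, roll]).as_matrix()
    return T, R
```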
and S4, solving the physical coordinates of the characteristic points by adopting a binocular vision method according to the coordinates of the unmanned aerial vehicles P1 and P2 and the pixel coordinates of the characteristic points in the P1 and P2. The concrete solving steps are as follows: the camera imaging model is as follows:
Figure BDA0003858619210000077
wherein
Figure BDA0003858619210000081
Is a pixel coordinate; k is a camera internal reference matrix which is obtained by the calibration method of Zhang Zhengyou in the foregoing;
Figure BDA0003858619210000082
is the normalized coordinate of the characteristic point in the image coordinate system. Let X 1 ,X 2 Respectively are the normalized coordinates of the characteristic points under the two images, and satisfy the following conditions:
X 1 =K -1 P 1
X 2 =K -1 P 2
X 2 =RP 1 +T (5)
where R and T are the rotation and translation matrices between the two images P1 and P2, respectively, as has been derived from the foregoing. Solving the above formula by numerical method to obtain X 1 ,X 2 The physical coordinates of the feature points can be obtained.
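The binocular-style solution can be sketched with OpenCV's triangulation routine, using the INS-derived R and T to form the second projection matrix (a sketch, under the assumption that R and T map the P1 camera frame to the P2 camera frame):

```python
import cv2
import numpy as np

def triangulate_features(K, R, T, pts1, pts2):
    """Solve feature-point coordinates (in the P1 camera frame) from the
    two images of the single infrared camera taken at P1 and P2.
    pts1, pts2: (N, 2) pixel coordinates of the same features."""
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])    # camera at P1
    P2 = K @ np.hstack([R, np.reshape(T, (3, 1))])       # camera at P2
    X_h = cv2.triangulatePoints(P1, P2,
                                np.asarray(pts1, float).T,
                                np.asarray(pts2, float).T)
    return (X_h[:3] / X_h[3]).T                          # (N, 3) points
```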
Step S5: as shown in fig. 5, the calculated physical coordinates of the feature points are matched with the power transmission line database, and if the matching is successful, they are corrected with the accurate physical coordinates in the database. The method for matching the feature-point physical coordinates with the power transmission line database is as follows:

Following the rank-deficient free-network theory of trilateration, the centroid of the measuring polygon formed by the feature-point physical coordinates is translated to the centroid of the polygon in the power transmission line database, and the matching degree M of the two polygons is calculated and evaluated. The expression of M is:

$$M = \sum_{i=1}^{n}\left[(x_i - \hat x_i)^2 + (y_i - \hat y_i)^2\right] \tag{6}$$

where n is the number of feature points, $(x_i, y_i)$ are the physical coordinates of the feature points, and $(\hat x_i, \hat y_i)$ are the database physical coordinates. The measuring polygon is rotated around the centroid to obtain new feature-point physical coordinates, and M is recalculated continuously. When the minimum value of M over a 360° rotation is smaller than a set threshold, the feature-point physical coordinates are considered successfully matched with the power transmission line database, i.e.:

$$\min(M) < M_{thr} \tag{7}$$

where $M_{thr}$ is the set matching threshold.
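A sketch of the centroid alignment and rotation search for the matching degree M of formula (6) is given below (the 1° rotation step is an assumption):

```python
import numpy as np

def min_match_degree(feature_pts, db_pts, step_deg=1.0):
    """Minimum matching degree M over a 360-degree rotation: both point
    sets are centered on their centroids, one is rotated, and the sum of
    squared coordinate residuals is evaluated (points in matching order)."""
    a = np.asarray(feature_pts, float)
    b = np.asarray(db_pts, float)
    a = a - a.mean(axis=0)                    # overlap the centroids
    b = b - b.mean(axis=0)
    best = np.inf
    for deg in np.arange(0.0, 360.0, step_deg):
        t = np.radians(deg)
        rot = np.array([[np.cos(t), -np.sin(t)],
                        [np.sin(t),  np.cos(t)]])
        best = min(best, float(np.sum((a @ rot.T - b) ** 2)))
    return best

# Successful match per formula (7):  min_match_degree(a, b) < M_thr
```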
Step S6: as shown in fig. 6, during the subsequent flight of the unmanned aerial vehicle, if the feature points in step S5 were successfully matched and corrected, the feature points used for the P3P pose solution are selected from the feature points with corrected physical coordinates; if fewer than 4 points were corrected, the remaining points are selected from the points with the largest physical-coordinate distances. If the matching was unsuccessful and no correction was made, the 4 points with the largest relative physical-coordinate distances are selected from the infrared image feature points, and the pose of the unmanned aerial vehicle relative to the power transmission line is solved with the P3P algorithm. The pose solution is specifically as follows:

Let the camera focal point be O and the 3 feature points be A, B and C. With x = OA/OC and y = OB/OC, the spatial P3P problem satisfies:

$$\begin{cases} x^2 + 1 - x\,q = \dfrac{AC^2}{AB^2}\left(x^2 + y^2 - x y\,p\right)\\[4pt] y^2 + 1 - y\,r = \dfrac{CB^2}{AB^2}\left(x^2 + y^2 - x y\,p\right) \end{cases} \tag{8}$$

where p = 2cos∠BOA, q = 2cos∠AOC, r = 2cos∠COB. The bivariate quadratic system above is solved by the Wu elimination method, obtaining 4 solutions $(x_i, y_i)$, i = 1,2,3,4. Substituting x and y back yields OA, OB and OC (with $OC = AB/\sqrt{x^2+y^2-xy\,p}$, OA = x·OC, OB = y·OC), and the positions of A, B and C in the camera coordinate system are then obtained from the direction vectors $\vec a$, $\vec b$, $\vec c$ given by the image coordinates:

$$A_{image} = OA\cdot\vec a,\qquad B_{image} = OB\cdot\vec b,\qquad C_{image} = OC\cdot\vec c \tag{9}$$

The 4th feature point D is then brought into the camera coordinate system, the Euclidean distances between D and the projections of the 4 solutions in the camera coordinate system are calculated, and the solution among the 4 with the smallest projection error in the physical coordinate system is selected as the correct one. Then, from the correct positions of A, B and C in the camera coordinate system and their corresponding positions in the physical coordinate system, the transformation between the two coordinate systems is solved, giving the pose of the camera in the physical coordinate system, i.e., the pose r of the unmanned aerial vehicle in the physical coordinate system.
Step S7: the pose information r obtained by infrared image matching is fused with the inertial navigation system and the satellite navigation system by covariance-weighted data fusion. The key of covariance-weighted track fusion is to use the error covariance matrices of the different positioning systems; the covariance-weighted fusion and Kalman filtering algorithm for the pose data is as follows:

The fusion weights are calculated from the pose data covariances $e_i^2$, i = 1,2,3, of infrared matching (1), inertial navigation (2) and the satellite navigation system (3); the fusion weight coefficients $c_i$ are:

$$c_i = \frac{1/e_i^2}{\sum_{j=1}^{3} 1/e_j^2},\qquad i = 1,2,3$$

That is, if the infrared, inertial navigation and satellite navigation pose data are $r_i$, i = 1,2,3, the final covariance-weighted fused navigation pose data $r_{out}$ is:

$$r_{out} = \sum_{i=1}^{3} c_i\, r_i$$

The pose-data estimation covariance after weighted fusion is:

$$q = E\left[(r_{out} - \hat r)(r_{out} - \hat r)^{T}\right]$$

where E is the expectation and $\hat r$ is the true pose information.

The weighted-fused positioning data are then tracked and filtered using the Kalman filtering algorithm; the state equation and measurement equation of the Kalman filtering system can be expressed in general form as:

$$\begin{cases} r_{out}(k+1) = f\left[r_{out}(k)\right] + \Gamma(k)\,w(k)\\ z(k) = h\left[r_{out}(k)\right] + v(k) \end{cases}$$

where $r_{out}(k)$ is the pose state matrix, $f[r_{out}(k)]$ is the system state transfer function, $w(k)$ is the Gaussian system noise, $\Gamma(k)$ is the distribution matrix of $w(k)$, $z(k)$ is the pose measurement matrix, $h[r_{out}(k)]$ is the measurement transfer function, and $v(k)$ is the Gaussian measurement noise.
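A compact sketch of the covariance-weighted fusion followed by one linear Kalman step is given below. Identity state and measurement models and scalar covariances $e_i^2$ are simplifying assumptions for illustration, not the patent's exact system:

```python
import numpy as np

def fuse_poses(poses, variances):
    """Inverse-covariance weighting of the infrared-matching, INS and
    satellite-navigation poses. poses: list of 3 equal-length vectors;
    variances: their covariances e_i**2. Returns (r_out, fused variance q)."""
    w = 1.0 / np.asarray(variances, float)
    c = w / w.sum()                                   # weights c_i
    r_out = sum(ci * np.asarray(p, float) for ci, p in zip(c, poses))
    return r_out, 1.0 / w.sum()

def kalman_step(x, P, z, q, F=None, Q=0.01):
    """One predict/update cycle tracking the fused pose z with measurement
    variance q; F defaults to an identity state-transition model (an
    assumption), and H = I."""
    n = len(x)
    F = np.eye(n) if F is None else F
    x_pred, P_pred = F @ x, F @ P @ F.T + Q * np.eye(n)
    S = P_pred + q * np.eye(n)
    K_gain = P_pred @ np.linalg.inv(S)
    x_new = x_pred + K_gain @ (z - x_pred)
    return x_new, (np.eye(n) - K_gain) @ P_pred

# r_out, q = fuse_poses([r_ir, r_ins, r_gnss], [e1**2, e2**2, e3**2])
# x, P = kalman_step(x, P, r_out, q)
```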
After the pose of the inspection unmanned aerial vehicle has been determined and more than 4 feature points have known physical coordinates, a newly found feature point no longer requires moving the unmanned aerial vehicle a distance: its physical coordinates can be calculated directly by the binocular method of step S4 from the already-computed unmanned aerial vehicle pose.

Steps S1-S7 are repeated to obtain the pose of the inspection unmanned aerial vehicle. Steps S3-S5 serve to obtain the physical coordinates of the feature points; once more than 4 feature-point physical coordinates are available, steps S3-S5 can be skipped and only steps S6-S7 repeated to solve the unmanned aerial vehicle pose.

This embodiment addresses the navigation and positioning needs of power transmission line inspection unmanned aerial vehicles. By introducing the feature database and a convolutional neural network, targets are identified, classified and coordinate-corrected, which greatly improves recognition stability and accuracy. Infrared images from different times and positions are identified and matched, the physical coordinates of the feature points are calculated, and the unmanned aerial vehicle pose data are then solved in reverse, overcoming the inability of a monocular infrared camera to obtain depth information and hence pose. Newly added feature points are subsequently solved directly from the known feature-point physical coordinates, avoiding repeated multi-position image acquisition and leaving normal inspection operation unaffected. Finally, the pose data of several positioning and navigation systems are fused, improving the pose-determination stability and accuracy of the power transmission line inspection unmanned aerial vehicle.
It should be noted that the terms "upper", "lower", "left", "right", "front", "back", etc. used in the present invention are for clarity of description only and are not intended to limit the scope of the invention; the relative relationships they denote may be changed or adjusted without substantial change to the technical content.
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above-mentioned embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may be made by those skilled in the art without departing from the principle of the invention.

Claims (8)

1. An unmanned aerial vehicle infrared inspection pose determining method based on image feature point matching is characterized by comprising the following steps:
s1: collecting an infrared image of the power transmission line by using an infrared camera mounted by an unmanned aerial vehicle, and performing distortion removal and normalization pretreatment on the collected infrared image;
s2: matching the infrared image with a power transmission line characteristic database, namely removing environmental influence, and selecting and acquiring characteristic points of the power transmission line in the infrared image;
s3: taking the current flight position of the unmanned aerial vehicle as an origin point p1, resetting the inertial navigation system data in the unmanned aerial vehicle, controlling the unmanned aerial vehicle to keep its original height and move laterally toward the power transmission line by a distance d, calculating the new coordinate point p2 of the unmanned aerial vehicle flight position after the flight distance d by the inertial navigation system, and simultaneously determining the flight (rotation) matrix R of the unmanned aerial vehicle during the movement;
s4: calculating physical coordinates of the characteristic points through binocular solution according to the p1 and p2 point coordinates of the unmanned aerial vehicle and the image coordinates of the characteristic points under the p1 and p2 points and the flight matrix R;
s5: matching the resolved feature point physical coordinates with a power transmission line database, and if the matching is successful, correcting by using accurate physical coordinates in the power transmission line database; if not, continuing to use the physical coordinates of the feature points calculated in the step S4;
s6: resolving by using the physical coordinates of the feature points processed in the step S5 and using a P3P algorithm to obtain pose information of the unmanned aerial vehicle relative to the power transmission line;
s7: and performing data fusion on the pose information obtained by the infrared image matching in the step S6 and pose information in the inertial navigation system and the satellite navigation system to obtain final pose information.
2. The unmanned aerial vehicle infrared inspection pose determination method based on image feature point matching according to claim 1, wherein the power transmission line feature database in the step S2 is trained in advance, using a YOLOv5s lightweight network model trained with PyTorch; the training classes are 3: wire, insulator and tower; the training data are derived from public data and photos collected in daily inspection, covering various inspection scenes of sunny days, cloudy days, morning, noon and evening.
3. The unmanned aerial vehicle infrared inspection pose determination method based on image feature point matching according to claim 1, wherein the requirement on and the selection method for the moving distance d in the step S3 are as follows:

the principle for moving the distance d is that the feature points selected in the step S2 must remain in the infrared image, which must differ from the infrared image before the movement;

the distance d is selected by the following method: the pixel moving distance of each selected feature point in the image coordinate system is more than 10% of the diagonal pixel distance of the infrared image resolution, namely

$$\Delta p_i > 0.1\sqrt{p_x^2 + p_y^2} \tag{1}$$

where i is any i-th feature point, $\Delta p_i$ is the pixel distance that the i-th feature point moves in the image coordinate system after moving the distance d in the geographic coordinate system, and $p_x$ and $p_y$ are respectively the length and width of the acquired infrared image resolution.
4. The method for determining the infrared inspection pose of the unmanned aerial vehicle based on image feature point matching according to claim 3, wherein in the step S3 the new coordinate point p2 of the unmanned aerial vehicle flight position after flying the distance d is calculated by the inertial navigation system, and the flight matrix R during the movement is determined, by the following formula:

$$T = \Delta p = \int_{t_1}^{t_1+\Delta t} v\,\mathrm{d}t,\qquad Euler = \int_{t_1}^{t_1+\Delta t} \omega\,\mathrm{d}t,\qquad R = R(\theta,\psi,\gamma)$$

where T denotes the movement (translation) vector; $\Delta p$ denotes the position change of the unmanned aerial vehicle, obtained by dead reckoning of the unmanned aerial vehicle's internal inertial navigation system; $t_1$ is the moment before the unmanned aerial vehicle moves, and $\Delta t$ is the time taken by the movement; $v$ is the unmanned aerial vehicle velocity output by the inertial navigation system; $\omega$ is the inertial-navigation gyroscope signal; and Euler is the attitude angle, comprising the pitch angle $\theta$, the yaw angle $\psi$ and the roll angle $\gamma$; the new coordinate point p2 after flying the distance d and the flight matrix R during the movement are determined by this formula.
5. The unmanned aerial vehicle infrared inspection pose determination method based on image feature point matching according to claim 4, wherein the formulas for calculating the physical coordinates of the feature points in the step S4 are as follows:

$$s\,P = K\,P_w \tag{2}$$

$$X_1 = K^{-1}P_1,\qquad X_2 = K^{-1}P_2,\qquad X_2 = R\,X_1 + T \tag{3}$$

where P is the image coordinate of any feature point, s its depth, and $P_w$ its spatial three-dimensional position; K is the intrinsic parameter matrix of the infrared camera; $X_1$, $X_2$ are the normalized coordinates of the corresponding feature point in the image coordinate system;

with $X_1$, $X_2$ being the normalized image coordinates of the same feature point before and after the unmanned aerial vehicle moves, $X_1$ and $X_2$ are obtained through the simultaneous formulas (2) and (3), and the physical coordinates of the corresponding feature point are then obtained by a numerical method; similarly, the physical coordinates of a plurality of feature points are obtained from formulas (2) and (3).
6. The unmanned aerial vehicle infrared inspection pose determination method based on image feature point matching according to claim 1, wherein the specific contents of matching the physical coordinates of the feature points with the power transmission line database in the step S5 are as follows:

a plurality of feature-point physical coordinates are selected and combined into a measuring polygon, the corresponding position points of the power transmission line database are selected and combined into a measuring polygon, the centroids of the two measuring polygons are placed so as to coincide, and the matching degree M of the two polygons is calculated and evaluated; the expression of M is:

$$M = \sum_{i=1}^{n}\left[(x_i - \hat x_i)^2 + (y_i - \hat y_i)^2\right] \tag{4}$$

where n is the number of feature points, $(x_i, y_i)$ are the physical coordinates of the feature points, and $(\hat x_i, \hat y_i)$ are the physical coordinates in the power transmission line database;

one measuring polygon is then rotated around the centroid to re-match the two measuring polygons; when the minimum value of M over a 360° rotation is smaller than a set threshold, the physical coordinates of the feature points are considered to be successfully matched with the power transmission line database, namely:

$$\min(M) < M_{thr} \tag{5}$$

where $M_{thr}$ is the set matching threshold.
7. The unmanned aerial vehicle infrared inspection pose determination method based on image feature point matching according to claim 1, wherein the unmanned aerial vehicle pose solution in the step S6 is given by the following formulas:

first, 3 feature points A, B and C whose physical coordinates are known from the step S5 are selected, and the coordinates of these 3 feature points in the camera coordinate system are calculated; let O be the camera focal point; from the spatial position relationship, with x = OA/OC and y = OB/OC, there is:

$$\begin{cases} x^2 + 1 - x\,q = \dfrac{AC^2}{AB^2}\left(x^2 + y^2 - x y\,p\right)\\[4pt] y^2 + 1 - y\,r = \dfrac{CB^2}{AB^2}\left(x^2 + y^2 - x y\,p\right) \end{cases} \tag{6}$$

where p = 2cos∠BOA, q = 2cos∠AOC, r = 2cos∠COB; the distances AB, BC and AC are known from the physical coordinates, while OA, OB and OC are unknown and are the quantities to be solved via formula (6); the bivariate quadratic system of formula (6) is solved by the Wu elimination method, obtaining 4 groups of solutions for x and y;

$$OC = \sqrt{\frac{AB^2}{x^2 + y^2 - x y\,p}},\qquad OA = x\cdot OC,\qquad OB = y\cdot OC \tag{7}$$

substituting the 4 groups of x and y in combination with formula (7) gives 4 groups of distances OA, OB and OC from the feature points A, B, C to the camera focal point O; the image coordinates corresponding to A, B and C are given by the infrared camera, so the direction vectors $\vec a$, $\vec b$, $\vec c$ of the feature points A, B and C are known, and the 4 groups of coordinates $A_{image}$, $B_{image}$, $C_{image}$ of A, B, C in the camera coordinate system are each obtained by formula (8):

$$A_{image} = OA\cdot\vec a,\qquad B_{image} = OB\cdot\vec b,\qquad C_{image} = OC\cdot\vec c \tag{8}$$

a 4th verification feature point D from the step S5 is then brought into the camera coordinate system, the Euclidean distances between D and the projections of the 4 solutions in the camera coordinate system are calculated, and the solution among the 4 with the smallest projection error in the physical coordinate system is selected as the correct one; this gives the correct corresponding coordinates of A, B and C in the camera coordinate system and the physical coordinate system, from which a simple coordinate transformation yields the pose of the infrared camera in the physical coordinate system, i.e., the pose r of the unmanned aerial vehicle in the physical coordinate system.
8. The unmanned aerial vehicle infrared inspection pose determination method based on image feature point matching according to claim 7, wherein the data fusion method in the step S7 adopts covariance weighting followed by Kalman filtering to obtain the final pose information;

the covariance weighting algorithm is:

$$c_i = \frac{1/e_i^2}{\sum_{j=1}^{3} 1/e_j^2},\qquad r_{out} = \sum_{i=1}^{3} c_i\, r_i$$

where i = 1,2,3 denote infrared matching (1), the inertial navigation system (2) and the satellite navigation system (3), respectively; $e_i$ represents the covariance of the pose data of the three systems; $c_i$ are the fusion weight coefficients of the three systems; $r_i$, i = 1,2,3, are the pose data of infrared matching, the inertial navigation system and the satellite navigation system; and $r_{out}$ is the final covariance-weighted fused navigation pose data;

the specific contents of Kalman filtering the fused navigation pose data $r_{out}$ to remove noise and obtain a more accurate data result are as follows:

first, the weighted-fused pose-data estimation covariance q is calculated:

$$q = E\left[(r_{out} - \hat r)(r_{out} - \hat r)^{T}\right]$$

where E is the expectation and $\hat r$ is the true pose information;

then the weighted-fused navigation pose data $r_{out}$ are tracked and filtered using the Kalman filtering algorithm; the state equation and measurement equation of the Kalman filtering system can be expressed in general form as:

$$\begin{cases} r_{out}(k+1) = f\left[r_{out}(k)\right] + \Gamma(k)\,w(k)\\ z(k) = h\left[r_{out}(k)\right] + v(k) \end{cases}$$

where $r_{out}(k)$ is the pose state matrix, $f[r_{out}(k)]$ is the system state transfer function, $w(k)$ is the Gaussian system noise, $\Gamma(k)$ is the distribution matrix of $w(k)$, $z(k)$ is the pose measurement matrix, $h[r_{out}(k)]$ is the measurement transfer function, and $v(k)$ is the Gaussian measurement noise, whose covariance is the fused covariance q.
CN202211155964.4A 2022-09-22 2022-09-22 Unmanned aerial vehicle infrared inspection pose determination method based on image feature point matching Pending CN115471555A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211155964.4A CN115471555A (en) 2022-09-22 2022-09-22 Unmanned aerial vehicle infrared inspection pose determination method based on image feature point matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211155964.4A CN115471555A (en) 2022-09-22 2022-09-22 Unmanned aerial vehicle infrared inspection pose determination method based on image feature point matching

Publications (1)

Publication Number Publication Date
CN115471555A 2022-12-13

Family

ID=84334768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211155964.4A Pending CN115471555A (en) 2022-09-22 2022-09-22 Unmanned aerial vehicle infrared inspection pose determination method based on image feature point matching

Country Status (1)

Country Link
CN (1) CN115471555A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116576850A (en) * 2023-07-12 2023-08-11 北京集度科技有限公司 Pose determining method and device, computer equipment and storage medium
CN116576850B (en) * 2023-07-12 2023-10-20 北京集度科技有限公司 Pose determining method and device, computer equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination