CN114238675A - Unmanned aerial vehicle ground target positioning method based on heterogeneous image matching - Google Patents


Publication number
CN114238675A
CN114238675A (application CN202111254997.XA)
Authority
CN
China
Prior art keywords
matching
image
pairs
unmanned aerial
map
Prior art date
Legal status (assumed; not a legal conclusion): Pending
Application number
CN202111254997.XA
Other languages
Chinese (zh)
Inventor
兰子柠
张华君
李钟谷
陈文鑫
张紫龙
周子鸣
Current Assignee (the listed assignee may be inaccurate)
Hubei Institute Of Aerospacecraft
Original Assignee
Hubei Institute Of Aerospacecraft
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Hubei Institute Of Aerospacecraft
Priority: CN202111254997.XA
Publication: CN114238675A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval of still image data
    • G06F 16/53 Querying
    • G06F 16/532 Query formulation, e.g. graphical querying
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583 Retrieval using metadata automatically derived from the content
    • G06F 16/5866 Retrieval using manually generated information, e.g. tags, keywords, comments, location and time information
    • G06F 16/587 Retrieval using geographical or spatial information, e.g. location

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an unmanned aerial vehicle (UAV) ground target positioning method based on heterogeneous image matching, comprising the following steps: S1, according to the UAV's flight mission, acquire from the internet a high-resolution remote-sensing satellite map of the flight area; S2, the UAV carries a visible-light camera and performs a carpet search of the flight area, adjusting the camera pitch angle so that the camera looks straight down (normal incidence) at the target and captures a nadir image of it; S3, process the aerial image; S4, run a sliding-window search over the satellite base map to obtain a number of base-map blocks, and find the block closest to the aerial image among them; S5, from the matching pairs obtained by heterogeneous image matching between the aerial image and the closest base-map block, compute the homography matrix M of the two images with OpenCV, and use M to map the centre point of the aerial image onto the satellite base map, obtaining the longitude and latitude of that centre point, which is also the target centre. The method has strong anti-interference capability, adapts well to field environments, and can position UAV ground targets without relying on GPS.

Description

Unmanned aerial vehicle ground target positioning method based on heterogeneous image matching
Technical Field
The invention belongs to the field of ground target positioning, and particularly relates to an unmanned aerial vehicle ground target positioning method based on heterogeneous image matching.
Background
Unmanned aerial vehicles (UAVs) offer unique advantages and flexibility and are used in many application scenarios, such as battlefield tasks, rescue reconnaissance, and target monitoring. The UAV's ability to position ground targets is a precondition for these tasks. Existing positioning technology mainly relies on an electro-optical reconnaissance platform: a laser rangefinder must be built into the platform, which then provides the azimuth, distance, and other information of the target relative to the platform, and the target's longitude and latitude are solved from a target-positioning equation (such as the three-point method) combined with the UAV's position and attitude information. Because the UAV's own position is required, target-positioning accuracy suffers greatly whenever that position cannot be known accurately, for example in environments with weak GPS signals or under electromagnetic interference. Moreover, this method requires the UAV to carry a laser rangefinder, which raises cost. A UAV ground-target positioning method is therefore needed that has stronger anti-interference capability, adapts to field environments, and does not depend on GPS.
Disclosure of Invention
The invention provides an unmanned aerial vehicle ground target positioning method based on heterogeneous image matching, aiming to solve the problem in existing methods that target-positioning accuracy degrades because the UAV's accurate position cannot be obtained when no GPS signal is available.
To achieve the above purpose, the technical scheme of the invention is as follows:
An unmanned aerial vehicle ground target positioning method based on heterogeneous image matching comprises the following steps:
S1, according to the UAV's flight mission, acquire from the internet a high-resolution remote-sensing satellite map of the flight area, where every pixel on the satellite map carries accurate longitude and latitude information; the map serves as the satellite base map for image matching;
S2, the UAV carries a visible-light camera and performs a carpet search of the flight area; if a target is found, the UAV is controlled to fly directly above the target centre, the camera pitch angle is adjusted so that the camera looks straight down (normal incidence) at the target, and a nadir image of the target is captured;
S3, the aerial image is processed: it is first scaled so that its spatial resolution matches that of the satellite map, then rotated according to the heading angle so that the image points north, at which point every object in the aerial image has the same orientation as in the satellite map;
S4, a sliding-window search is run over the satellite base map, with the window size set to the size of the processed aerial image and the overlap rate set to 60% or more, obtaining a number of base-map blocks; among all the blocks, the one closest to the aerial image is found;
S5, from the matching pairs obtained by heterogeneous image matching between the aerial image and the closest base-map block, the homography matrix M of the two images is computed with the findHomography module in OpenCV; M maps the centre point of the aerial image onto the satellite base map, every point of which carries longitude and latitude information, yielding the longitude and latitude of the aerial image's centre point, which is also the target centre.
Further, the remote-sensing satellite map in step S1 has zoom levels 1-18 and is sourced from map software (Google Maps, Baidu Maps, Amap, Tencent Maps, and the like); the highest level, 18, is selected as the satellite base map, with a spatial resolution of 0.5 m or better.
Further, the specific search strategy in step S4 is: perform heterogeneous image matching between the aerial image and each base-map block; each matching yields a number of matching pairs between the two images, and the base-map block with the most matching pairs is taken as the closest base-map block.
Further, the heterogeneous image matching process between the aerial image and any base-map block is as follows:
S41, use the deep-learning model D2-Net to extract features from the aerial image, and likewise use D2-Net to extract features from each base-map block one by one;
S42, roughly match the features of the aerial image and the base-map block with a K-nearest-neighbour search, obtaining a number of matching pairs;
S43, refine the matching pairs with a dynamic adaptive constraint;
S44, further reject mismatching pairs with the RANSAC algorithm;
S45, finally, take the base-map block with the most matching pairs as the closest base-map block.
Further, the specific process of rough matching in step S42 is: roughly match the feature vectors of the two images with the K-nearest-neighbour algorithm, with K = 2, obtaining N matching pairs; each matching pair contains the nearest (1st) matching point, at Euclidean distance dis_j, and the second-nearest (2nd) matching point, at distance dis′_j.
Further, the specific process of refining the matching pairs in step S43 is as follows: refine the matching pairs with a dynamic adaptive Euclidean-distance constraint, and compute the mean of the distance difference between the 1st and 2nd matching points over all matching pairs:

avgdis = (1/N) · Σ_{j=1}^{N} (dis′_j − dis_j)

For each matching pair to be screened, the retention condition is that the 1st distance is smaller than the 2nd distance minus the mean difference avgdis, as in the formula:

dis_j < dis′_j − avgdis

where dis_j is the distance to the nearest (1st) matching point and dis′_j is the distance to the second-nearest (2nd) matching point; matching pairs that do not satisfy the formula are deleted, and those that satisfy it are kept.
Further, the specific process of rejecting mismatching pairs with the RANSAC algorithm in step S44 is as follows:
S441, randomly draw several matching-pair samples from the refined matching pairs and fit a model P with them;
S442, compute the error of every remaining matching pair against the model P; pairs with error below a threshold are inliers, and pairs with error above it are outliers;
S443, the above process is called one iteration; after r iterations, the result with the most inliers is the final result, and all outliers found are mismatching pairs and are directly rejected.
Further, fewer than 10 matching-pair samples are drawn in step S441.
The unmanned aerial vehicle ground target positioning method based on the heterogeneous image matching has the following advantages:
(1) The invention can complete high-precision positioning of a ground target without the help of GPS and has strong resistance to electromagnetic interference.
(2) Compared with existing positioning methods based on laser ranging and attitude measurement, the method of the invention needs no complex positioning device or auxiliary sensors and is low in cost: the UAV only needs to carry a camera and position the ground target from image information alone.
(3) The invention combines UAVs with deep-learning technology, works normally in complex environments such as deserts and mountains that are hard to search manually, is simple to operate and convenient to implement, and can effectively replace large-scale ground searches.
In summary, the method has stronger anti-interference capability, adapts better to field environments, and can position UAV ground targets without relying on GPS.
Drawings
FIG. 1 is a schematic flow chart of the method for locating the ground target of the unmanned aerial vehicle based on the matching of heterogeneous images according to the present invention;
FIG. 2 is a schematic diagram of heterogeneous image matching between an aerial image and any base-map block;
FIG. 3 shows an aerial image and the base-map block with the fewest matching pairs;
FIG. 4 shows an aerial image and the base-map block with the most matching pairs.
In the figures: 1, aerial image; 11, aerial image 1 (the same image as the aerial image, differently oriented); 2, satellite base-map block; 21, base-map block 1; 22, base-map block 2; 3, a line whose two end points form a matching pair.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the detailed description and specific examples, while indicating the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
As shown in fig. 1, the invention provides an unmanned aerial vehicle ground target positioning method based on heterogeneous image matching, comprising the following steps:
S1, according to the UAV's flight mission, acquire from the internet a high-resolution remote-sensing satellite map of the flight area; every pixel on the satellite map carries accurate longitude and latitude information, and the map serves as the satellite base map for image matching.
In step S1, the remote-sensing map has zoom levels 1-18 and is sourced from map software (Google Maps, Baidu Maps, Amap, Tencent Maps, and the like); the highest level, 18, is selected as the satellite base map, with a spatial resolution of 0.5 m or better.
S2, the UAV carries a visible-light/infrared camera and performs a carpet search of a given area; if a target is found, the UAV is steered directly above the target centre, the camera pitch angle is adjusted so that the camera looks straight down (normal incidence) at the target, and a nadir image of the target is captured.
S3, process the aerial image: first scale it so that its spatial resolution matches that of the satellite map, then rotate it according to the heading angle so that the image points north; at this point every object in the aerial image has the same orientation as in the satellite map.
S4, run a sliding-window search over the satellite base map, with the window size set to the size of the processed aerial image and the overlap rate set to 60% or more, obtaining a number of base-map blocks. Among all the blocks, find the one closest to the aerial image.
The specific search strategy in step S4 is: perform heterogeneous image matching between the aerial image and each base-map block; each matching yields a number of matching pairs between the two images, and the block with the most matching pairs is taken as the closest base-map block. Because the ground area covered by the satellite base map is far larger than that of the aerial image, the processed aerial image is usually far smaller than the satellite map, and matching the two directly produces large errors. The sliding-window method ensures that the aerial image is only ever matched against base-map blocks covering a comparable ground extent, which effectively improves matching accuracy.
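The sliding-window tiling described in step S4 can be sketched as follows (a minimal illustration only: the 60% overlap follows the patent, while the function name, the example sizes, and the stride rounding are our own assumptions):

```python
def sliding_window_tiles(base_w, base_h, win_w, win_h, overlap=0.6):
    """Yield (x, y) top-left corners of base-map blocks cut from a
    base map of size base_w x base_h with the given overlap rate."""
    step_x = max(1, int(win_w * (1 - overlap)))  # 60% overlap -> 40% stride
    step_y = max(1, int(win_h * (1 - overlap)))
    tiles = []
    y = 0
    while y + win_h <= base_h:
        x = 0
        while x + win_w <= base_w:
            tiles.append((x, y))
            x += step_x
        y += step_y
    return tiles

# e.g. a 1000 x 1000 base map cut into 400 x 400 windows at 60% overlap
tiles = sliding_window_tiles(1000, 1000, 400, 400, overlap=0.6)
```

With these example numbers the stride is 160 pixels in each direction, so neighbouring blocks share well over half their area, which is what lets a target near a block boundary still fall wholly inside some block.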
As shown in fig. 2, the heterogeneous image matching process between the aerial image and any base-map block in the search strategy is as follows: S41, use the deep-learning model D2-Net to extract features from the aerial image, and likewise use D2-Net to extract features from each base-map block.
In detail, the feature extraction in this patent uses the deep-learning model D2-Net. The traditional approach is to detect keypoints first (keypoint detection) and then extract a descriptor for each keypoint (feature description), the detect-then-describe paradigm. From a single image it finally yields a set of keypoints and their corresponding descriptors, i.e. the keypoints and their n-dimensional feature vectors. The commonly used SIFT and SURF algorithms belong to this detect-then-describe category. Although features extracted by traditional methods are scale-invariant, their expressive power is limited on heterogeneous images with large differences in illumination, season, and spectral band. D2-Net is a learned keypoint detection and description algorithm: by training a CNN it extracts keypoints and descriptors simultaneously, end to end, a paradigm called describe-and-detect. D2-Net first computes a feature map of the input image with the CNN, then obtains descriptors by slicing the feature map, and selects local maxima of the feature map as keypoints. Because the keypoints and descriptors in D2-Net are extracted by a CNN, they carry high-level semantic information and perform better on strongly differing heterogeneous images. After D2-Net feature extraction is applied to the aerial image and to a base-map block, each image's keypoints are obtained, each represented by an n-dimensional feature vector.
S42, roughly matching the characteristics of the aerial photograph image and any base image block by using a K neighbor search algorithm to obtain a plurality of matching pairs;
the specific process of rough matching is as follows: coarse matching is carried out on the feature vectors of the two pictures by using a K neighbor search algorithm, K is made to be 2, N matching pairs are obtained, and each matching pair comprises the 1 st matching point dis closest to the Euclidean distancejAnd the next 2 nd matching point dis'j
In theory, any number K >2 can be taken, but K is typically 2 because only the two closest points are used.
In the present embodiment, first, n feature vectors (n1, n2 … … nn) are extracted from the aerial photography image, and m feature vectors (m1, m2, … … mm) are extracted from the bottom image block, each of which represents a pixel point in the original image. For n1, K vectors closest to the n1 vectors in euclidean distance are found from m eigenvectors, where K equals 2, and it is described that only the 1 st matching point closest to the n1 vector and the second 2 nd matching point need to be found each time; each feature vector in n1 … … nn can find its corresponding point in (m1, m2 … … mm), so that there are n pairs of matching pairs; for example, if the 1 st matching point of the n1 vector is m3 and the 2 nd matching point is m9, then (n1, m3) is considered to be a matching pair.
After the K neighbor is used, each point (n1 … … nn) can find a matching pair, which is not good, so that the matching pair is refined by using a dynamic adaptive euclidean distance constraint condition, and since the used constraint condition relates to the 1 st matching point and the next 2 nd matching point, K is 2.
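A minimal brute-force version of this K-nearest-neighbour rough matching (K = 2, Euclidean distance; in practice a library matcher such as OpenCV's knnMatch or a KD-tree would be used, and all names here are illustrative):

```python
import math

def knn2_match(aerial_feats, base_feats):
    """For every aerial feature vector find its nearest (dis_j) and
    second-nearest (dis'_j) base feature vectors by Euclidean distance,
    mirroring K-nearest-neighbour matching with K = 2."""
    matches = []
    for i, f in enumerate(aerial_feats):
        # sort all base vectors by distance to f and keep the two closest
        dists = sorted((math.dist(f, g), j) for j, g in enumerate(base_feats))
        (d1, j1), (d2, _j2) = dists[0], dists[1]
        matches.append((i, j1, d1, d2))  # (query, best match, dis_j, dis'_j)
    return matches
```

For instance, matching n1 = (0, 0) against base vectors (1, 0) and (3, 0) pairs n1 with the first vector at distance 1 while recording the second-nearest distance 3, exactly the two values the refinement step needs.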
S43, purifying the matched pairs by using a dynamic self-adaptive constraint condition, and specifically comprising the following steps:
and (3) purifying the matching pairs by using a dynamic self-adaptive Euclidean distance constraint condition, and counting the mean value of the distance difference between the 1 st matching point and the 2 nd matching point in all the matching pairs, as shown in a formula (1):
Figure BDA0003323821400000071
for each matched pair to be screened, the condition of purification is that the 1 st distance is smaller than the difference between the 2 nd distance and the distance difference mean value avgdis, as shown in formula (2):
disj<dis′j-avgdis (2)
disjindicates a distance value, dis ', from the nearest 1 st matching point'jDistance values representing second closest matching points, e.g. n1 eigenvector of aerial photograph, the 1 st matching point of which is m3 vector of ground block, and Euclidean distance values between n1 and m3 being disj(ii) a The 2 nd matching point is m9 vector, and the Euclidean distance value between n1 and m9 is dis'jDeleting the matching pairs which do not satisfy the formula, and leaving the matching pairs which satisfy the formula; and carrying out RANSAC algorithm on the purified matching pairs to eliminate mismatching pairs, so as to obtain matching pairs with higher quality, and more accurate longitude and latitude calculation.
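The dynamic adaptive constraint of formulas (1) and (2) translates directly into code (a sketch; here `matches` is assumed to hold (dis_j, dis′_j) distance pairs from the rough-matching step):

```python
def refine_matches(matches):
    """Keep a matching pair only if dis_j < dis'_j - avgdis, where
    avgdis is the mean of (dis'_j - dis_j) over all N pairs
    (formulas (1) and (2) of the dynamic adaptive constraint)."""
    if not matches:
        return []
    avgdis = sum(d2 - d1 for d1, d2 in matches) / len(matches)
    return [(d1, d2) for d1, d2 in matches if d1 < d2 - avgdis]
```

Because avgdis is recomputed from the current match set, the threshold adapts to each image pair instead of being fixed, which is what makes the constraint "dynamic".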
S44, further rejecting mismatching pairs by using a RANSAC algorithm;
the RANSAC algorithm is a common algorithm in the field of computer vision. Assuming a set of data has "in-office points" and "out-office points," the distribution of the in-office points conforms to a mathematical model, and the out-office points are data that cannot fit the model. In this specific embodiment, the process of rejecting the mismatching pairs by the RANSAC algorithm is as follows:
s441, randomly extracting a plurality of pairs of matching pair samples from the purified matching pairs, and fitting a model P, specifically a homography matrix of 3 multiplied by 3, with the plurality of pairs of matching pair samples;
s442, calculating errors of the other matching pairs and the model P, and if the errors are smaller than a threshold value, determining the matching pairs as local points, and if the errors are larger than the threshold value, determining the matching pairs as local points;
and S443, the process is called one iteration, a certain result with the largest number of local points after r iterations is a final result, all the calculated local points are mismatching pairs, and the mismatching pairs are directly removed.
Fewer than 10 matching-pair samples are drawn in step S441; in this example, 4 matching-pair samples are taken.
In general, after the initial matching (the K-nearest-neighbour step), an aerial image and a satellite base-map block may share tens to hundreds of matching pairs; the dynamic constraint then removes some, RANSAC removes more, leaving on the order of tens of matching pairs, and the remaining pairs are finally used to compute the homography matrix. Concretely, suppose an aerial image and a base-map block share 100 matching pairs before RANSAC: the algorithm randomly selects several of the 100 pairs to fit a model P, computes the errors of the remaining pairs against P, and separates inliers from outliers. If, after r iterations, 30 pairs are identified as outliers, those 30 pairs are rejected and the remaining 70 pairs are regarded as high-quality matches, so the computed longitude and latitude are more accurate.
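The iterate/score/keep-best structure of S441-S443 can be sketched with a deliberately simplified model: a pure 2-D translation fitted from a single sampled pair, instead of the 3 x 3 homography the patent fits from several pairs, so the RANSAC skeleton stays visible (all names are hypothetical):

```python
import random

def ransac_translation(pairs, iters=100, thresh=1.0, seed=0):
    """pairs: list of ((x1, y1), (x2, y2)) candidate matches.
    Each iteration fits a model P = (dx, dy) from one random sample,
    counts inliers (error below thresh), and keeps the model with the
    most inliers; everything outside the best model is rejected as a
    mismatching pair."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(pairs)
        dx, dy = x2 - x1, y2 - y1            # fitted model P
        inliers = [p for p in pairs
                   if abs(p[1][0] - p[0][0] - dx) < thresh
                   and abs(p[1][1] - p[0][1] - dy) < thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

With three pairs sharing the same (10, 0) shift and one grossly inconsistent pair, the consistent three survive and the inconsistent one is discarded as a mismatch.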
S45, finally, the bottom image block with the most matched pairs is taken as the closest bottom image block;
As shown in figs. 3 and 4, the left side of both figures is aerial image 1 after rotation and scaling; the right side of fig. 3 is base-map block 1, and the right side of fig. 4 is base-map block 2. Each connecting line in the middle represents a matching pair: the line joins a point in the left image to a point in the right image, indicating that the two points lie at the same position in the real environment. The more matching pairs, the more similar the two images, so the goal is to find, among all the base-map blocks, the one most similar to the aerial image. One aerial image can be matched against many base-map blocks: for example, aerial image 1 yields 10 matching pairs with base-map block 1, 20 with base-map block 3 (not shown), and 25 with base-map block 2. Block 2 has the most matching pairs, so aerial image 1 and base-map block 2 are the most similar pair of images, and block 2 is the closest base-map block for aerial image 1.
S5, from the matching pairs obtained by heterogeneous image matching between the aerial image and the closest base-map block, compute the homography matrix M of the two images with the findHomography module in OpenCV. M maps the centre point of the aerial image onto the satellite base map; since every point of the base map carries longitude and latitude information, this yields the longitude and latitude of the aerial image's centre point, which is also the target centre.
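Applying the homography M produced by findHomography to the aerial image's centre point is a single projective multiplication; a plain-Python sketch (M would in practice come from cv2.findHomography, and the helper name is ours):

```python
def apply_homography(M, point):
    """Map (x, y) through a 3x3 homography M:
    [x', y', w]^T = M [x, y, 1]^T, then divide by w.
    M is a nested list of three rows of three numbers."""
    x, y = point
    xp = M[0][0] * x + M[0][1] * y + M[0][2]
    yp = M[1][0] * x + M[1][1] * y + M[1][2]
    w = M[2][0] * x + M[2][1] * y + M[2][2]
    return (xp / w, yp / w)

# Mapping the centre (W/2, H/2) of the aerial image gives the pixel in
# the satellite base map whose stored longitude/latitude is the target's.
```

For a pure-translation homography [[1, 0, 5], [0, 1, -2], [0, 0, 1]], the point (3, 4) maps to (8, 2); for the identity matrix every point maps to itself.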

Claims (8)

1. An unmanned aerial vehicle ground target positioning method based on heterogeneous image matching, characterized by comprising the following steps:
s1, acquiring, from the internet and according to the flight mission of the unmanned aerial vehicle, a high-resolution remote-sensing satellite map of the flight area, wherein every pixel on the satellite map carries accurate longitude and latitude information and the map serves as the satellite base map for image matching;
s2, the unmanned aerial vehicle carrying a visible-light camera and performing a carpet search of the flight area; if a target is found, controlling the unmanned aerial vehicle to fly directly above the target centre, adjusting the camera pitch angle so that the camera looks straight down (normal incidence) at the target, and capturing a nadir image of the target;
s3, processing the aerial image: first scaling it so that its spatial resolution matches that of the satellite map, then rotating it according to the heading angle so that the image points north, at which point every object in the aerial image has the same orientation as in the satellite map;
s4, performing a sliding-window search on the satellite base map, with the window size set to the size of the processed aerial image and the overlap rate set to 60% or more, obtaining a number of base-map blocks, and finding among all the blocks the one closest to the aerial image;
s5, from the matching pairs obtained by heterogeneous image matching between the aerial image and the closest base-map block, computing the homography matrix M of the two images with the findHomography module in OpenCV, and using M to map the centre point of the aerial image onto the satellite base map, every point of which carries longitude and latitude information, thereby obtaining the longitude and latitude of the aerial image's centre point, which is also the target centre.
2. The method for positioning a ground target of an unmanned aerial vehicle based on heterogeneous image matching according to claim 1, wherein the remote-sensing satellite map in step S1 has zoom levels 1-18 and is sourced from map software; the highest level, 18, is selected as the satellite base map, with a spatial resolution of 0.5 m or better.
3. The method for positioning a ground target of an unmanned aerial vehicle based on heterogeneous image matching according to claim 1 or 2, wherein the specific search strategy in step S4 is: perform heterogeneous image matching between the aerial image and each base-map block; each matching yields a number of matching pairs between the two images, and the base-map block with the most matching pairs is taken as the closest base-map block.
4. The method for positioning a ground target of an unmanned aerial vehicle based on heterogeneous image matching according to claim 3, wherein the heterogeneous image matching process between the aerial image and any base-map block is as follows:
s41, using the deep-learning model D2-Net to extract features from the aerial image, and likewise using D2-Net to extract features from each base-map block one by one;
s42, roughly matching the features of the aerial image and the base-map block with a K-nearest-neighbour search, obtaining a number of matching pairs;
s43, refining the matching pairs with a dynamic adaptive constraint;
s44, further rejecting mismatching pairs with the RANSAC algorithm;
s45, finally, taking the base-map block with the most matching pairs as the closest base-map block.
5. The method for positioning ground targets of unmanned aerial vehicles based on heterogeneous image matching according to claim 4, wherein the specific process of rough matching in step S42 is as follows: roughly match the feature vectors of the two images using the K-nearest-neighbor algorithm with K = 2 to obtain N matching pairs, where each matching pair contains the nearest (1st) matching point at Euclidean distance dis_j and the second-nearest (2nd) matching point at distance dis′_j.
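The K = 2 rough matching of step S42 can be sketched as a brute-force 2-nearest-neighbor search (in practice this would run on D2-Net descriptors with an optimized matcher such as OpenCV's knnMatch; the plain-list implementation and names here are illustrative):

```python
def knn2_match(desc_a, desc_b):
    """Brute-force K=2 nearest-neighbor matching between two descriptor
    sets (lists of equal-length float vectors). For each descriptor in
    desc_a, returns (index of nearest in desc_b, dis_j, dis2_j), where
    dis_j and dis2_j are the 1st and 2nd smallest Euclidean distances."""
    matches = []
    for a in desc_a:
        dists = sorted(
            (sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5, j)
            for j, b in enumerate(desc_b))
        (d1, j1), (d2, _) = dists[0], dists[1]
        matches.append((j1, d1, d2))
    return matches
```

Each returned triple carries exactly the two distances that the refinement condition of step S43 operates on.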
6. The method for locating the ground target of the unmanned aerial vehicle based on heterogeneous image matching as claimed in claim 4, wherein the specific process of refining the matching pairs in step S43 is as follows: purify the matching pairs using a dynamic adaptive Euclidean distance constraint condition, first computing the mean of the distance differences between the 1st and 2nd matching points over all matching pairs:

avgdis = (1/N) · Σ_{j=1}^{N} (dis′_j − dis_j)

For each matching pair to be screened, the purification condition is that the 1st distance is smaller than the 2nd distance minus the mean distance difference avgdis, expressed by the formula:

dis_j < dis′_j − avgdis

where dis_j denotes the distance to the nearest (1st) matching point and dis′_j the distance to the second-nearest (2nd) matching point; matching pairs that do not satisfy the formula are deleted, and those that satisfy it are retained.
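The adaptive purification of step S43 is a few lines given the (dis_j, dis′_j) pairs from the K = 2 matcher (the triple layout is an assumption matching a 2-NN matcher's output; the threshold itself follows the formula in the claim):

```python
def purify(matches):
    """Dynamic adaptive Euclidean-distance purification (step S43).
    `matches` is a list of (j, dis_j, dis2_j) triples from a K=2
    matcher. avgdis is the mean of (dis2_j - dis_j) over all pairs;
    a pair is kept only if dis_j < dis2_j - avgdis."""
    avgdis = sum(d2 - d1 for _, d1, d2 in matches) / len(matches)
    return [m for m in matches if m[1] < m[2] - avgdis]
```

Because avgdis is recomputed from the current pair set, the threshold adapts to each image pair instead of being a fixed ratio.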
7. The method for positioning ground targets of unmanned aerial vehicles based on heterogeneous image matching according to claim 4, wherein the specific process of rejecting mismatching pairs in step S44 is as follows:
S441, randomly extract several matching-pair samples from the purified matching pairs and fit a model P with them;
S442, compute the error of each remaining matching pair with respect to the model P; if the error is smaller than a threshold, the pair is classified as an inlier, and if it is larger than the threshold, as an outlier;
S443, the above process constitutes one iteration; after r iterations, the result with the largest number of inliers is taken as the final result, and all pairs classified as outliers under it are mismatching pairs and are removed directly.
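Steps S441-S443 can be sketched with a minimal RANSAC loop. The model P is not specified in the claim (for image pairs it is typically the homography later fitted by findHomography); a 2-D translation model is used here purely for brevity, and all names are illustrative:

```python
import random

def ransac_translation(pairs, threshold=1.0, iterations=25, seed=0):
    """Minimal RANSAC in the spirit of steps S441-S443, simplified to a
    2-D translation model P. `pairs` is a list of ((x, y), (x2, y2))
    point correspondences; returns the best model and its inliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iterations):
        # S441: randomly sample one correspondence and fit the model
        (sx, sy), (dx, dy) = pairs[int(rng.random() * len(pairs))]
        model = (dx - sx, dy - sy)  # translation (tx, ty)
        # S442: pairs with error below the threshold are inliers,
        # the rest are outliers
        inliers = [p for p in pairs
                   if abs(p[1][0] - p[0][0] - model[0]) < threshold
                   and abs(p[1][1] - p[0][1] - model[1]) < threshold]
        # S443: over r iterations keep the model with the most inliers
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = model, inliers
    return best_model, best_inliers  # outliers = pairs not in best_inliers
```

With a homography model the sampling step would draw at least four pairs per iteration, which is why claim 8 bounds the sample size.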
8. The method of claim 7, wherein the number of matching-pair samples in step S441 is fewer than 10 pairs.
CN202111254997.XA 2021-10-27 2021-10-27 Unmanned aerial vehicle ground target positioning method based on heterogeneous image matching Pending CN114238675A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111254997.XA CN114238675A (en) 2021-10-27 2021-10-27 Unmanned aerial vehicle ground target positioning method based on heterogeneous image matching


Publications (1)

Publication Number Publication Date
CN114238675A true CN114238675A (en) 2022-03-25

Family

ID=80743295

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111254997.XA Pending CN114238675A (en) 2021-10-27 2021-10-27 Unmanned aerial vehicle ground target positioning method based on heterogeneous image matching

Country Status (1)

Country Link
CN (1) CN114238675A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114637876A (en) * 2022-05-19 2022-06-17 中国电子科技集团公司第五十四研究所 Large-scene unmanned aerial vehicle image rapid positioning method based on vector map feature expression
CN114637876B (en) * 2022-05-19 2022-08-12 中国电子科技集团公司第五十四研究所 Large-scene unmanned aerial vehicle image rapid positioning method based on vector map feature expression
CN114998773A (en) * 2022-08-08 2022-09-02 四川腾盾科技有限公司 Characteristic mismatching elimination method and system suitable for aerial image of unmanned aerial vehicle system
CN114998773B (en) * 2022-08-08 2023-02-17 四川腾盾科技有限公司 Characteristic mismatching elimination method and system suitable for aerial image of unmanned aerial vehicle system
CN115932823A (en) * 2023-01-09 2023-04-07 中国人民解放军国防科技大学 Aircraft ground target positioning method based on heterogeneous region feature matching

Similar Documents

Publication Publication Date Title
CN110009739B (en) Method for extracting and coding motion characteristics of digital retina of mobile camera
CN114238675A (en) Unmanned aerial vehicle ground target positioning method based on heterogeneous image matching
CN102088569B (en) Sequence image splicing method and system of low-altitude unmanned vehicle
US10438366B2 (en) Method for fast camera pose refinement for wide area motion imagery
CN106529538A (en) Method and device for positioning aircraft
CN114216454B (en) Unmanned aerial vehicle autonomous navigation positioning method based on heterogeneous image matching in GPS refusing environment
CN110319772B (en) Visual large-span distance measurement method based on unmanned aerial vehicle
CN112419374B (en) Unmanned aerial vehicle positioning method based on image registration
US20170228585A1 (en) Face recognition system and face recognition method
CN111323024B (en) Positioning method and device, equipment and storage medium
CN110097498B (en) Multi-flight-zone image splicing and positioning method based on unmanned aerial vehicle flight path constraint
CN117036300A (en) Road surface crack identification method based on point cloud-RGB heterogeneous image multistage registration mapping
CN111583342B (en) Target rapid positioning method and device based on binocular vision
CN117218201A (en) Unmanned aerial vehicle image positioning precision improving method and system under GNSS refusing condition
CN111950370A (en) Dynamic environment offline visual milemeter expansion method
KR102249381B1 (en) System for generating spatial information of mobile device using 3D image information and method therefor
CN113096016A (en) Low-altitude aerial image splicing method and system
CN111583332A (en) Visual positioning method, system and device based on parallel search 2D-3D matching
Zhang et al. An UAV navigation aided with computer vision
Venable Improving real-world performance of vision aided navigation in a flight environment
Venable Improving Real World Performance for Vision Navigation in a Flight Environment
Wang et al. Stereo rectification based on epipolar constrained neural network
CN115597592B (en) Comprehensive positioning method applied to unmanned aerial vehicle inspection
Zhang et al. Video image target recognition and geolocation method for UAV based on landmarks
WO2020076263A2 (en) A system for providing position determination with high accuracy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination