CN112419374A - Unmanned aerial vehicle positioning method based on image registration - Google Patents

Unmanned aerial vehicle positioning method based on image registration

Info

Publication number
CN112419374A
CN112419374A
Authority
CN
China
Prior art keywords
image
unmanned aerial
aerial vehicle
shot
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011252158.XA
Other languages
Chinese (zh)
Other versions
CN112419374B (en)
Inventor
百晓 (Bai Xiao)
张鹏程 (Zhang Pengcheng)
张亮 (Zhang Liang)
刘祥龙 (Liu Xianglong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN202011252158.XA
Publication of CN112419374A
Application granted
Publication of CN112419374B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an unmanned aerial vehicle (UAV) positioning method based on image registration, which comprises the following steps: (1) preprocess the UAV-captured image: read the flight height from the UAV's onboard altitude sensor and the flight direction from its onboard heading sensor, derive the spatial-resolution and orientation differences between the captured image and the satellite map from the map image's annotation information, and apply rotation and scale transformations so that the captured image and the map image share the same orientation and scale; (2) detect key points in the UAV-captured image; (3) extract SIFT features at the key points detected in the UAV-captured image; (4) match features between the UAV-captured image and the map image to obtain correspondences between key-point image coordinates in the two images; (5) estimate the spatial transformation from the UAV-captured image to the satellite map image and, combined with the map's geographic information, obtain the longitude and latitude of the captured image's center point as the UAV's current coordinates.

Description

Unmanned aerial vehicle positioning method based on image registration
Technical Field
The invention relates to vision-based unmanned aerial vehicle (UAV) positioning when GPS is unavailable, and in particular to a UAV positioning method that registers UAV-view images against satellite map images.
Background
Unmanned aerial vehicles offer low power consumption, low cost, flexibility, and extensibility, and are widely used in tasks such as photography, aerial surveying and mapping, agriculture, rescue, and logistics. When executing a task, positioning is the basis of UAV control and decision making. Most current UAV platforms are equipped with an onboard camera and a GPS chip, and can also carry sensors such as an IMU, an onboard compass, and a barometer for sensing the environment and monitoring the vehicle's own state. Under normal conditions a UAV can position itself through dedicated hardware such as GPS; in special environments, when the GPS signal is unavailable or the hardware fails, a more stable positioning method is needed to guarantee the task in progress. Combined with the sensor platform the UAV carries, vision-based aerial positioning can achieve this goal effectively and improve the stability and reliability of the UAV system.
Image registration is a classic problem in computer vision: given two images, one image is mapped onto the other by finding a spatial transformation so that points corresponding to the same location in space can be put into one-to-one correspondence. In practice a homography is usually used to describe the spatial transformation between two images; the homogeneous coordinates of a point in one image, multiplied by the homography matrix, give the homogeneous coordinates of the corresponding point in the other image. A common keypoint feature-matching pipeline realizes image registration as follows. To estimate the transformation from image A to image B: first, detect key points in both images, i.e., pixels that are stable under image transformations and salient in the scene; second, extract neighborhood information around each detected key point to build a fixed-length vector as its feature description; third, for each keypoint feature in image A, search its k nearest neighbors among the keypoint features of image B by feature-vector distance, discard keypoints of poor significance through a significance test, and for each remaining keypoint in image A select the nearest of its k neighbors as the final matching point; finally, from the matching relation between the remaining key points of image A and those of image B, fit a homography matrix from image A to image B using the RANSAC algorithm or a direct linear transformation. In image-registration-based aerial UAV positioning, the UAV registers its ground-facing captured image against a satellite map image carrying geographic information, and solving a high-precision homography matrix realizes the positioning of the UAV.
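As a point of reference, the following is a minimal sketch of the generic keypoint-matching pipeline just described (detect, describe, k-NN match, ratio-style significance test, RANSAC fit). It is not the patented method itself, and it assumes OpenCV (opencv-python) with SIFT available:

```python
import cv2
import numpy as np

def register(img_a, img_b):
    """Estimate the homography mapping img_a onto img_b (generic pipeline)."""
    sift = cv2.SIFT_create()
    kp_a, desc_a = sift.detectAndCompute(img_a, None)
    kp_b, desc_b = sift.detectAndCompute(img_b, None)

    # k-nearest-neighbour search in descriptor space (k = 2).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(desc_a, desc_b, k=2)

    # Significance (ratio) test: keep a match only when the nearest
    # neighbour is clearly better than the second nearest.
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]

    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC filters wrong matches and fits the homography A -> B.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H
```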
In recent years many research teams in China and abroad have proposed vision-based aerial UAV positioning methods. Vision-based UAV positioning was first treated as an image-retrieval problem: a large-scale image database with geographic information is built, the UAV-captured image is processed to extract features, the most similar images are retrieved from the database by feature similarity, and precise image matching follows. This approach requires constructing a high-quality, large-scale database, which in turn requires substantial storage. Compared with retrieval, directly registering the UAV's ground-facing image against the regional satellite map is more efficient. However, when algorithms such as SIFT and SURF are used for this direct registration, the satellite map and the UAV image differ greatly in scale and rotation angle, and the rich texture of the full regional satellite map introduces a large amount of noise, so the algorithms fail to find correct matches in the satellite map for key points in the UAV image. This motivated coarse-to-fine positioning methods, which first roughly estimate the UAV's position within the region and then perform image registration against the small area around that rough position. In 2018 Ahmed Nassar et al. (France) and in 2019 Hunter Goforth et al. (USA) adopted coarse-to-fine UAV visual positioning based on visual odometry: the UAV's initial position must be known and an initial-position image captured at takeoff, the UAV must continuously photograph the ground during flight with effective overlap between consecutive frames, the rough position is accumulated through frame-to-frame image registration, and fine registration is then performed over the roughly localized area. Ahmed Nassar et al. estimate frame-to-frame transformations with the SIFT algorithm and achieve precise matching by matching the geometric shapes of objects after semantic segmentation, which requires effective support from an aerial-view semantic segmentation model. Hunter Goforth et al. estimate frame-to-frame transformations with a pre-trained neural network and refine the frame-to-frame and frame-to-map transformations by optimization, improving estimation accuracy, but they do not escape the limitations of the visual odometer, which still constrains practical applications.
Disclosure of Invention
The problems solved by the invention are as follows. Because the UAV-captured image and the satellite map image differ greatly in scale and rotation angle, and the rich texture of the satellite map introduces a large amount of noise, directly registering the UAV image against the satellite map of the whole area with traditional keypoint features rarely yields correct matches for the UAV image's key points, and no effective homography matrix can be obtained. Coarse-to-fine visual positioning methods rely on continuous visual-odometry accumulation to estimate the UAV's position and require two stages and multiple image registrations, which limits their application. The invention provides a method that realizes UAV visual positioning through a single registration between the UAV-captured image and the satellite map image of the complete area, in combination with the sensors carried by the UAV hardware platform.
The technical solution of the invention comprises the following steps:
(1) UAV image preprocessing. The UAV-captured image and the satellite map image differ greatly in orientation and scale, and feature invariance alone cannot absorb these differences; they must be reduced for registration to succeed. Using simple sensors carried by the UAV platform, the flight height can be read from a sensor such as a barometer and the flight direction from a sensor such as an onboard compass. Since the spatial resolution and the standard (north-up) orientation of the satellite map image are known, the rotation angle and scale difference between the UAV image and the map image can be estimated, and the UAV image can be aligned to the satellite map by rotation and scaling transformations. This effectively reduces the scale and rotation differences between the two images and improves the stability and accuracy of the registration;
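A minimal sketch of this preprocessing step, assuming OpenCV and that `heading_deg` and the two metres-per-pixel resolutions come from the compass, the barometer-derived estimate, and the map annotation; the rotation sign depends on the heading convention and is an assumption here:

```python
import cv2

def preprocess(uav_img, heading_deg, uav_res_m_per_px, map_res_m_per_px):
    """Rotate the UAV image to the map's north-up orientation, then
    rescale it to the map's spatial resolution."""
    h, w = uav_img.shape[:2]
    # Rotate about the image centre so image "up" points north.
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), -heading_deg, 1.0)
    rotated = cv2.warpAffine(uav_img, M, (w, h))
    # Scale so one pixel covers the same ground distance as a map pixel.
    s = uav_res_m_per_px / map_res_m_per_px
    return cv2.resize(rotated, None, fx=s, fy=s)
```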
(2) Key-point detection in the UAV-captured image. After preprocessing, the UAV image and the satellite map image are approximately aligned in scale and orientation, so object edges and the edges of object parts in the two images have clear correspondences. The SEEDS superpixel algorithm segments an image into many homogeneous superpixels whose boundaries coincide with the edges of objects or of their parts. Since the scales of the two images have been corrected and their spatial resolutions are essentially consistent, the superpixel edge points SEEDS generates on the same object in the two images correspond accurately, and associating these key-point coordinates allows an accurate image transformation to be estimated;
(3) Key-point feature extraction from the UAV-captured image. Step (2) detects a large number of key points in both the UAV image and the satellite map image, and the transformation between the two images is estimated by associating these key points. The association is measured by the similarity of key-point features: SIFT descriptors extract the appearance of each key point's neighborhood. SIFT features have a degree of rotation and scale invariance, tolerate the preprocessing errors of step (1) to some extent, and provide a stable and effective key-point description;
(4) Feature matching between the UAV-captured image and the satellite map image. The SuperGlue algorithm performs feature matching on the key-point feature sets extracted from the two images: a graph neural network aggregates the keypoint features and generates, for each key point, a description that combines its appearance, its spatial position, and information from neighboring key points. A similarity matrix is built over the aggregated features, and the Sinkhorn algorithm iteratively optimizes the association to obtain the optimal matching of key points between the two images;
(5) Estimating the spatial transformation from the UAV-captured image to the map image. From the point correspondences obtained by feature matching in step (4), the homography that best fits the key-point correspondences is solved with the RANSAC algorithm, which is robust to noise, filters out wrong matches, and outputs an accurate homography. The coordinates of the UAV image's center in the map image are computed from the homography estimated by RANSAC, and, combined with the map's geographic information, the longitude and latitude of that point are obtained as the UAV's current coordinates.
Compared with the prior art, the invention directly registers the UAV-captured image against the map image after only simple preprocessing. Unlike coarse-to-fine UAV visual positioning methods, it requires neither the UAV's initial position nor a rough position estimate from a visual odometer or sensors such as an IMU: a single image registration completes the visual positioning, which effectively reduces the usage restrictions of image-registration-based aerial UAV positioning.
Drawings
FIG. 1 is a flow chart of the image-registration-based UAV positioning method;
FIG. 2 is a schematic diagram showing the relationship between the flying height of the unmanned aerial vehicle and the spatial resolution of the captured image;
FIG. 3 is a diagram illustrating the detection result of the image key points;
FIG. 4 is a flowchart of the SuperGlue feature matching algorithm.
Detailed Description
To make the objects, technical solutions, and advantages of the invention clearer, a detailed description follows with reference to the accompanying drawings and embodiments.
As shown in fig. 1, the unmanned aerial vehicle positioning method based on image registration of the present invention includes the following steps:
(1) Preprocess the UAV-captured image: read the flight height from the UAV's onboard altitude sensor and the flight direction from its onboard heading sensor, derive the spatial-resolution and orientation differences between the UAV image and the satellite map from the map image's annotation information, and apply rotation and scale transformations so that the UAV image and the map image share the same orientation and scale;
(2) Detect key points in the UAV-captured image: segment the preprocessed image with the SEEDS superpixel segmentation algorithm to generate a superpixel partition, and select the boundary points of all superpixels as key points;
(3) Extract key-point features from the UAV-captured image, i.e., SIFT features at the key points detected in the image;
(4) Match features between the UAV-captured image and the map image: apply the SuperGlue algorithm to the key-point feature sets extracted from the two images, and obtain the correspondence between key-point image coordinates in the two images from the feature correspondences;
(5) Estimate the spatial transformation from the UAV-captured image to the satellite map image: from the point correspondences obtained by feature matching, solve the homography matrix from the UAV image to the satellite map image with the RANSAC algorithm, compute the coordinates of the UAV image's center in the map image, and, combined with the map's geographic information, obtain the longitude and latitude of that point as the UAV's current coordinates.
In step (1), the spatial resolution and the standard orientation of the satellite map image are known; the UAV's heading angle in flight is determined by reading sensor information, and the captured image is rotated to align with the map's orientation. The spatial resolution of the UAV image is computed from the flight height. As shown in fig. 2, for the same camera carried by the UAV, images shot vertically downward at different flight heights obey a similar-triangles relation in the camera's field of view, namely:
h1 / h2 = a1 / a2 = b1 / b2
where h1 and h2 are different flight heights of the UAV (unit: meters), a1 and a2 are the spatial lengths (unit: meters) of the images captured at those heights, and b1 and b2 are the corresponding spatial widths (unit: meters). The camera is tested in advance at a given height to determine the spatial resolution of images captured there. During positioning in flight, the UAV's flight height is read, the spatial resolution of the captured image is computed from the similarity relation, and the image is scaled to match the spatial resolution of the satellite map image.
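A short sketch of this calibration-and-proportion computation; the function name and the calibration values in the example are illustrative assumptions:

```python
def uav_resolution_m_per_px(height_m, height_ref_m, res_ref_m_per_px):
    """Metres-per-pixel at the current height, by the similar-triangles
    relation h1/h2 = a1/a2: the ground footprint grows linearly with
    height for a nadir-pointing camera."""
    return res_ref_m_per_px * (height_m / height_ref_m)

# Example: calibrated at 100 m with 0.05 m/px, flying at 150 m
# gives 0.075 m/px; the scale factor to a 0.10 m/px map is then 0.75.
print(uav_resolution_m_per_px(150.0, 100.0, 0.05))  # 0.075
```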
In step (2), the edges of objects in the image and the edges of their parts serve as effective key points for registration. As shown in fig. 3, the SEEDS superpixel segmentation algorithm segments the image into many homogeneous superpixels whose boundaries coincide with the edges of objects and their parts. After step (1) the two images have consistent scale and orientation, so the superpixel boundary points that SEEDS produces in the two images correspond accurately. The SEEDS algorithm takes a parameter that sets the number of superpixels to generate; in the key-point detection step of the invention this parameter is computed from the image size, and its value is set as:
[formula shown as an image in the original] where hi and wi are the height and width of the image (unit: pixels) and ⌊·⌋ denotes the floor (round-down) operation. For the satellite map image, key-point detection is performed before the UAV takes off; the key-point set detected that first time is used throughout the flight without repeated detection. At each positioning, key-point detection is performed only on the preprocessed UAV-captured image. The SEEDS parameter is set by the same method for both images, ensuring that the algorithm produces the same superpixel partition on the same object.
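A sketch of this detection step using the SEEDS implementation in opencv-contrib (cv2.ximgproc). The patent's exact superpixel-count formula appears only as an image, so `num_superpixels` is passed in here rather than derived; the level and iteration counts are assumed defaults:

```python
import cv2
import numpy as np

def seeds_keypoints(bgr_img, num_superpixels, num_levels=4, iterations=10):
    """Segment with SEEDS and return superpixel boundary pixels as
    (x, y) keypoint coordinates."""
    h, w, c = bgr_img.shape
    seeds = cv2.ximgproc.createSuperpixelSEEDS(
        w, h, c, num_superpixels, num_levels)
    seeds.iterate(bgr_img, iterations)
    # Boundary pixels of the superpixels serve as the key points.
    contour_mask = seeds.getLabelContourMask(False)
    ys, xs = np.nonzero(contour_mask)
    return np.stack([xs, ys], axis=1)
```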
In step (3), SIFT features are extracted at the key points detected in the image. SIFT computes image gradient orientations in the neighborhood of each key point and builds a descriptor containing neighborhood information; it has a degree of rotation and scale invariance and some tolerance to preprocessing errors. Before the UAV takes off, after key-point detection, the SIFT features of the satellite map image at its key points are extracted once and stored; this first-extracted SIFT feature set is used throughout the flight without recomputation. During positioning in flight, key points are detected in the UAV's ground image and its SIFT feature set at those key points is extracted.
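A sketch of describing externally supplied key points with SIFT in OpenCV; the keypoint `size` (descriptor neighbourhood scale, 8 px here) is an assumed parameter:

```python
import cv2

def sift_at_points(gray_img, points, patch_size=8.0):
    """Compute SIFT descriptors at given (x, y) points instead of
    letting SIFT pick its own detections."""
    kps = [cv2.KeyPoint(float(x), float(y), patch_size) for x, y in points]
    sift = cv2.SIFT_create()
    kps, desc = sift.compute(gray_img, kps)
    return kps, desc
```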
In step (4), the SIFT feature sets extracted from the two images are matched. As shown in fig. 4, edges are built among the key points within each image and between the key points of the two images to form a graph whose node features consist of the key points' image coordinates and SIFT descriptors. The SuperGlue algorithm aggregates the features of each node with a graph neural network, transforming the features by combining each key point's neighborhood appearance, spatial position, and neighboring key-point information, and updates the node features to obtain more salient key-point features. Let M be the number of key points in the UAV-captured image and N the number in the satellite map image. The pairwise similarities of key-point features across the two images are computed to build an M x N similarity matrix, which is iteratively optimized with the Sinkhorn algorithm (maximum number of iterations T = 100) to obtain an M x N association matrix. Each row of the association matrix represents the association between the feature of the corresponding point in the UAV image and the features of all key points in the satellite map image; for each key point in the UAV image, the satellite-map key point with the highest association value is selected as its matching point.
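A simplified, hedged sketch of the Sinkhorn step: log-domain row/column normalization of the similarity matrix followed by row-wise argmax. Real SuperGlue also learns the features and adds a dustbin row/column for unmatched points, which this sketch omits; the temperature `tau` is an assumed hyperparameter:

```python
import numpy as np

def sinkhorn_match(similarity, T=100, tau=0.1):
    """similarity: (M, N) matrix of keypoint-feature similarities.
    Returns, for each UAV-image key point, the index of the map key
    point with the highest association after T Sinkhorn iterations."""
    log_P = similarity / tau
    for _ in range(T):
        # Alternately normalise rows and columns in the log domain so
        # the association matrix approaches a doubly-stochastic one.
        log_P -= np.logaddexp.reduce(log_P, axis=1, keepdims=True)
        log_P -= np.logaddexp.reduce(log_P, axis=0, keepdims=True)
    return np.argmax(log_P, axis=1)
```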
In step (5), the key-point coordinate correspondences obtained in step (4) are fitted with the RANSAC algorithm, with the error threshold set to 1 pixel and the maximum number of iterations set to 2048, to solve the homography matrix that best fits the key-point matching result; this homography is taken as the spatial transformation from the UAV-captured image to the satellite map image. The longitude and latitude of the UAV image's center point are taken as the UAV's current position: the point's coordinates in the satellite map image are solved from the homography matrix, and its longitude and latitude are computed from the satellite map's latitude-longitude annotation.
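A sketch of this final step with OpenCV's RANSAC homography (threshold 1 px, 2048 iterations, as in the text). The linear pixel-to-lat/lon interpolation assumes a north-up map annotated with its corner coordinates, which is an assumption about the map format:

```python
import cv2
import numpy as np

def locate(uav_pts, map_pts, uav_shape, map_shape, lat_range, lon_range):
    """uav_pts, map_pts: matched points as (K, 1, 2) float32 arrays.
    Returns (lat, lon) of the UAV image centre."""
    H, _ = cv2.findHomography(uav_pts, map_pts, cv2.RANSAC,
                              ransacReprojThreshold=1.0, maxIters=2048)
    # Project the UAV image centre into map pixel coordinates.
    h, w = uav_shape[:2]
    cx, cy = cv2.perspectiveTransform(
        np.float32([[[w / 2.0, h / 2.0]]]), H)[0, 0]
    # Interpolate lat/lon from the map's geo-annotation (north-up).
    mh, mw = map_shape[:2]
    lat = lat_range[0] + (lat_range[1] - lat_range[0]) * cy / mh
    lon = lon_range[0] + (lon_range[1] - lon_range[0]) * cx / mw
    return lat, lon
```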
Through the above steps, and by exploiting the simple sensors carried by the UAV, the influence of the heavy noise in the satellite map image on the registration result is reduced. Compared with coarse-to-fine visual positioning, no visual odometer, IMU, or similar hardware is needed to roughly estimate the UAV's position, and a single image registration suffices to determine the UAV's position in the area, reducing the limitations of UAV visual positioning in application. Parts of the invention not described in detail are well known in the art.
Although illustrative embodiments of the invention have been described above to help those skilled in the art understand it, the invention is not limited to the scope of those embodiments. Various changes will be apparent to those skilled in the art, and all inventions using the inventive concepts set forth herein are intended to be protected, provided they do not depart from the spirit and scope of the invention as defined by the appended claims.

Claims (7)

1. An unmanned aerial vehicle positioning method based on image registration, characterized by comprising the following steps:
(1) preprocessing the UAV-captured image: reading the flight height from the UAV's onboard altitude sensor and the flight direction from its onboard heading sensor, deriving the spatial-resolution and orientation differences between the UAV image and the satellite map from the map image's annotation information, and applying rotation and scale transformations so that the UAV image and the map image share the same orientation and scale;
(2) detecting key points in the UAV-captured image: segmenting the preprocessed image with the SEEDS superpixel segmentation algorithm to generate a superpixel partition, and selecting the boundary points of all superpixels as key points;
(3) extracting key-point features from the UAV-captured image, i.e., SIFT features at the key points detected in the image;
(4) matching features between the UAV-captured image and the map image: applying the SuperGlue algorithm to the key-point feature sets extracted from the two images, and obtaining the correspondence between key-point image coordinates in the two images from the feature correspondences;
(5) estimating the spatial transformation from the UAV-captured image to the satellite map image: from the point correspondences obtained by feature matching, solving the homography matrix from the UAV image to the satellite map image with the RANSAC algorithm, computing the coordinates of the UAV image's center in the map image, and, combined with the map's geographic information, obtaining the longitude and latitude of that point as the UAV's current coordinates.
2. The unmanned aerial vehicle positioning method based on image registration according to claim 1, characterized in that said step (1) specifically comprises:
the spatial resolution and the standard orientation of the satellite map image are known; the UAV's heading angle in flight is determined by reading sensor information, and the captured image is rotated to align with the map's orientation; the spatial resolution of the UAV image is computed from the flight height: for the same camera carried by the UAV, images shot vertically downward at different flight heights obey a similar-triangles relation in the camera's field of view, namely

h1 / h2 = a1 / a2 = b1 / b2

where h1 and h2 are different flight heights of the UAV, a1 and a2 are the spatial lengths of the images captured at those heights, and b1 and b2 are the corresponding spatial widths; the camera is tested in advance at a given height to determine the spatial resolution of images captured there; during positioning in flight, the UAV's flight height is read, the spatial resolution of the captured image is computed from the similarity relation, and the image is scaled to match the spatial resolution of the satellite map image.
3. The unmanned aerial vehicle positioning method based on image registration according to claim 1, characterized in that the key-point detection of step (2) specifically comprises:
using the edges of objects in the image and the edges of their parts as effective key points for registration; segmenting the image into many homogeneous superpixels with the SEEDS superpixel segmentation algorithm, whose boundaries coincide with the edges of objects and their parts in the image; after step (1) the two images have consistent scale and orientation, so the superpixel boundary points that the SEEDS algorithm produces in the two images correspond accurately; the SEEDS algorithm takes a parameter that sets the number of superpixels to generate, and in the key-point detection step this parameter is computed from the image size and set as

[formula shown as an image in the original]

where hi and wi are the height and width of the image and ⌊·⌋ denotes the floor (round-down) operation.
4. The unmanned aerial vehicle positioning method based on image registration according to claim 3, characterized in that key-point detection on the satellite map image is performed before the UAV takes off; the key-point set detected that first time is used throughout the flight without repeated detection; at each positioning only the preprocessed UAV-captured image undergoes key-point detection; and the SEEDS parameter is set by the same method for both images, ensuring that the algorithm produces the same superpixel partition on the same object.
5. The unmanned aerial vehicle positioning method based on image registration according to claim 1, characterized in that said step (3) specifically comprises:
the method comprises the steps of extracting SIFT features of key points detected in an image, extracting the SIFT features of the key points of the SIFT features, wherein the SI.
6. The unmanned aerial vehicle positioning method based on image registration according to claim 1, characterized in that said step (4) specifically comprises:
matching the SIFT feature sets extracted from the two images: building edges among the key points within each image and between the key points of the two images to form a graph whose node features consist of the key points' image coordinates and SIFT descriptors; the SuperGlue algorithm aggregating the features of each node with a graph neural network, combining each key point's neighborhood appearance, spatial position, and neighboring key-point information to transform and update the node features, obtaining more salient key-point features; denoting by M the number of key points in the UAV-captured image and by N the number in the satellite map image, computing the pairwise similarities of key-point features across the two images to build an M x N similarity matrix, and performing iterative optimization with the Sinkhorn algorithm with maximum number of iterations T to obtain an M x N association matrix, each row of which represents the association between the feature of the corresponding point in the UAV image and the features of all key points in the satellite map image; and, according to the values of the association matrix, selecting for each key point in the UAV image the satellite-map key point with the highest association as its matching point.
7. The unmanned aerial vehicle positioning method based on image registration according to claim 1, characterized in that said step (5) specifically comprises:
and (3) fitting the corresponding relation of the image coordinates of the key points obtained in the step (4) by using a RANSAC algorithm, setting an error threshold value and the maximum iteration times of the RANSAC algorithm, solving a homography matrix with the best fitting effect on the matching result of the key points, taking the homography matrix as a space transformation relation from the image shot by the unmanned aerial vehicle to the satellite map image, selecting the longitude and latitude of the central point of the image shot by the unmanned aerial vehicle as the longitude and latitude of the current position of the unmanned aerial vehicle, solving the coordinates of the point in the satellite map image according to the homography matrix, and calculating the longitude and latitude of the central point according to the longitude and latitude range of the satellite map.
CN202011252158.XA 2020-11-11 2020-11-11 Unmanned aerial vehicle positioning method based on image registration Active CN112419374B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011252158.XA CN112419374B (en) 2020-11-11 2020-11-11 Unmanned aerial vehicle positioning method based on image registration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011252158.XA CN112419374B (en) 2020-11-11 2020-11-11 Unmanned aerial vehicle positioning method based on image registration

Publications (2)

Publication Number Publication Date
CN112419374A 2021-02-26
CN112419374B 2022-12-27

Family

ID=74781858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011252158.XA Active CN112419374B (en) 2020-11-11 2020-11-11 Unmanned aerial vehicle positioning method based on image registration

Country Status (1)

Country Link
CN (1) CN112419374B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113432594A (en) * 2021-07-05 2021-09-24 北京鑫海宜科技有限公司 Unmanned aerial vehicle automatic navigation system based on map and environment
CN113838126A (en) * 2021-09-27 2021-12-24 广州市赋安电子科技有限公司 Video monitoring and unmanned aerial vehicle image alignment method
CN114201633A (en) * 2022-02-17 2022-03-18 四川腾盾科技有限公司 Self-adaptive satellite image generation method for unmanned aerial vehicle visual positioning
CN114612788A (en) * 2022-03-22 2022-06-10 东北林业大学 Urban landscape plant diversity monitoring method based on neural network
CN115495611A (en) * 2022-11-18 2022-12-20 中国电子科技集团公司第五十四研究所 Space scene retrieval method oriented to autonomous positioning of unmanned aerial vehicle
CN115861591A (en) * 2022-12-09 2023-03-28 南京航空航天大学 Unmanned aerial vehicle positioning method based on transform key texture coding matching
CN117253029A (en) * 2023-09-07 2023-12-19 北京自动化控制设备研究所 Image matching positioning method based on deep learning and computer equipment
CN117274391A (en) * 2023-11-23 2023-12-22 长春通视光电技术股份有限公司 Digital map matching target positioning method based on graphic neural network

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106485655A (en) * 2015-09-01 2017-03-08 张长隆 A kind of taken photo by plane map generation system and method based on quadrotor
CN109460046A (en) * 2018-10-17 2019-03-12 吉林大学 A kind of unmanned plane identify naturally not with independent landing method
CN110111372A (en) * 2019-04-16 2019-08-09 昆明理工大学 Medical figure registration and fusion method based on SIFT+RANSAC algorithm
CN110569861A (en) * 2019-09-01 2019-12-13 中国电子科技集团公司第二十研究所 Image matching positioning method based on point feature and contour feature fusion
CN111028292A (en) * 2019-12-13 2020-04-17 中国电子科技集团公司第二十研究所 Sub-pixel level image matching navigation positioning method
US20200226352A1 (en) * 2012-06-13 2020-07-16 San Diego State University Research Foundation Wide area intermittent video using non-orthorectified feature matching in long period aerial image capture with pixel-based georeferencing
CN111666882A (en) * 2020-06-08 2020-09-15 武汉唯理科技有限公司 Method for extracting answers of handwritten test questions
CN111780764A (en) * 2020-06-30 2020-10-16 杭州海康机器人技术有限公司 Visual positioning method and device based on visual map

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200226352A1 (en) * 2012-06-13 2020-07-16 San Diego State University Research Foundation Wide area intermittent video using non-orthorectified feature matching in long period aerial image capture with pixel-based georeferencing
CN106485655A (en) * 2015-09-01 2017-03-08 张长隆 A kind of taken photo by plane map generation system and method based on quadrotor
CN109460046A (en) * 2018-10-17 2019-03-12 吉林大学 A kind of unmanned plane identify naturally not with independent landing method
CN110111372A (en) * 2019-04-16 2019-08-09 昆明理工大学 Medical figure registration and fusion method based on SIFT+RANSAC algorithm
CN110569861A (en) * 2019-09-01 2019-12-13 中国电子科技集团公司第二十研究所 Image matching positioning method based on point feature and contour feature fusion
CN111028292A (en) * 2019-12-13 2020-04-17 中国电子科技集团公司第二十研究所 Sub-pixel level image matching navigation positioning method
CN111666882A (en) * 2020-06-08 2020-09-15 武汉唯理科技有限公司 Method for extracting answers of handwritten test questions
CN111780764A (en) * 2020-06-30 2020-10-16 杭州海康机器人技术有限公司 Visual positioning method and device based on visual map

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DUAN FUZHOU et al.: "A novel approach for coarse-to-fine windthrown tree extraction based on unmanned aerial vehicle images", Remote Sensing *
QIAO WENZHI (乔文治): "UAV aerial image stitching based on SIFT features" (基于SIFT特征的无人机航拍图像拼接), Trainer (教练机) *
LIU XUE et al. (刘学等): "Research on UAV visual navigation based on the SIFT algorithm" (基于SIFT算法的无人机视觉导航研究), Radio Engineering (无线电工程) *
LONG GUCAN (龙古灿): "Research on key technologies for precise ground-target positioning by UAVs based on videometrics" (基于摄像测量的无人机对地面目标精确定位关键技术研究), China Doctoral Dissertations Full-text Database, Engineering Science and Technology II (中国优秀博硕士学位论文全文数据库(博士) 工程科技Ⅱ辑) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113432594A (en) * 2021-07-05 2021-09-24 北京鑫海宜科技有限公司 Unmanned aerial vehicle automatic navigation system based on map and environment
CN113838126A (en) * 2021-09-27 2021-12-24 广州市赋安电子科技有限公司 Video monitoring and unmanned aerial vehicle image alignment method
CN113838126B (en) * 2021-09-27 2022-05-10 广州赋安数字科技有限公司 Video monitoring and unmanned aerial vehicle image alignment method
CN114201633A (en) * 2022-02-17 2022-03-18 四川腾盾科技有限公司 Self-adaptive satellite image generation method for unmanned aerial vehicle visual positioning
CN114612788A (en) * 2022-03-22 2022-06-10 东北林业大学 Urban landscape plant diversity monitoring method based on neural network
CN115495611A (en) * 2022-11-18 2022-12-20 中国电子科技集团公司第五十四研究所 Space scene retrieval method oriented to autonomous positioning of unmanned aerial vehicle
CN115861591A (en) * 2022-12-09 2023-03-28 南京航空航天大学 Unmanned aerial vehicle positioning method based on transform key texture coding matching
CN115861591B (en) * 2022-12-09 2024-02-02 南京航空航天大学 Unmanned aerial vehicle positioning method based on transformer key texture coding matching
CN117253029A (en) * 2023-09-07 2023-12-19 北京自动化控制设备研究所 Image matching positioning method based on deep learning and computer equipment
CN117274391A (en) * 2023-11-23 2023-12-22 长春通视光电技术股份有限公司 Digital map matching target positioning method based on graphic neural network
CN117274391B (en) * 2023-11-23 2024-02-06 长春通视光电技术股份有限公司 Digital map matching target positioning method based on graphic neural network

Also Published As

Publication number Publication date
CN112419374B (en) 2022-12-27

Similar Documents

Publication Publication Date Title
CN112419374B (en) Unmanned aerial vehicle positioning method based on image registration
Nassar et al. A deep CNN-based framework for enhanced aerial imagery registration with applications to UAV geolocalization
US8259994B1 (en) Using image and laser constraints to obtain consistent and improved pose estimates in vehicle pose databases
CN105352509B (en) Unmanned plane motion target tracking and localization method under geography information space-time restriction
US9280832B2 (en) Methods, systems, and computer readable media for visual odometry using rigid structures identified by antipodal transform
CN111213155A (en) Image processing method, device, movable platform, unmanned aerial vehicle and storage medium
Dabeer et al. An end-to-end system for crowdsourced 3D maps for autonomous vehicles: The mapping component
US9495747B2 (en) Registration of SAR images by mutual information
US11748449B2 (en) Data processing method, data processing apparatus, electronic device and storage medium
CN109871739B (en) Automatic target detection and space positioning method for mobile station based on YOLO-SIOCTL
CN107560603A (en) A kind of unmanned plane oblique photograph measuring system and measuring method
Dawood et al. Harris, SIFT and SURF features comparison for vehicle localization based on virtual 3D model and camera
CN113340312A (en) AR indoor live-action navigation method and system
CN115423863B (en) Camera pose estimation method and device and computer readable storage medium
Müller et al. Squeezeposenet: Image based pose regression with small convolutional neural networks for real time uas navigation
JP2023530449A (en) Systems and methods for air and ground alignment
Chen et al. Real-time geo-localization using satellite imagery and topography for unmanned aerial vehicles
CN113838129B (en) Method, device and system for obtaining pose information
Hou et al. UAV pose estimation in GNSS-denied environment assisted by satellite imagery deep learning features
Majdik et al. Micro air vehicle localization and position tracking from textured 3d cadastral models
KR102249381B1 (en) System for generating spatial information of mobile device using 3D image information and method therefor
CN117073669A (en) Aircraft positioning method
Zhang et al. An UAV navigation aided with computer vision
Han et al. Uav vision: Feature based accurate ground target localization through propagated initializations and interframe homographies
KR20220062709A (en) System for detecting disaster situation by clustering of spatial information based an image of a mobile device and method therefor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant