CN112419374B - Unmanned aerial vehicle positioning method based on image registration - Google Patents

Unmanned aerial vehicle positioning method based on image registration

Info

Publication number: CN112419374B (grant of earlier publication CN112419374A)
Application number: CN202011252158.XA
Authority: CN (China)
Prior art keywords: unmanned aerial vehicle, image, shot images
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 百晓, 张鹏程, 张亮, 刘祥龙
Current and original assignee: Beihang University (the listed assignees may be inaccurate)
Application filed by Beihang University
Priority to CN202011252158.XA

Classifications

    • G06T7/337: Image analysis; determination of transform parameters for the alignment of images (image registration) using feature-based methods involving reference images or patches
    • G06F18/22: Pattern recognition; matching criteria, e.g. proximity measures
    • G06T7/74: Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06V10/443: Local feature extraction by analysis of parts of the pattern (edges, contours, corners, strokes or intersections) by matching or filtering
    • G06V20/10: Scenes; terrestrial scenes
    • G06V20/13: Satellite images


Abstract

The invention provides an unmanned aerial vehicle positioning method based on image registration, comprising the steps of: (1) preprocessing the image shot by the unmanned aerial vehicle: the flight height is read from an on-board altitude sensor and the flight direction from an on-board heading sensor; the spatial-resolution difference and the direction difference between the shot image and the satellite map are derived from the satellite map image's annotation information, and the shot image is rotated and rescaled so that it has the same direction and scale as the map image; (2) detecting key points in the image shot by the unmanned aerial vehicle; (3) extracting SIFT features at the key points detected in the shot image; (4) matching the features of the shot image and the map image to obtain the correspondence between key-point image coordinates in the two images; (5) estimating the spatial transformation from the shot image to the satellite map image and, combined with the geographic information of the map, obtaining the longitude and latitude of the shot image's centre point as the current longitude and latitude coordinates of the unmanned aerial vehicle.

Description

Unmanned aerial vehicle positioning method based on image registration
Technical Field
The invention relates to a vision-based unmanned aerial vehicle positioning method for use when GPS is unavailable, and in particular to positioning an unmanned aerial vehicle by registering its camera-view images against satellite map images.
Background
Unmanned aerial vehicles have the advantages of low power consumption, low cost, flexibility and expandability, and are widely applied to tasks such as photography, aerial surveying and mapping, agriculture, rescue and logistics. When executing a specific task, positioning is the basis of unmanned aerial vehicle control and decision making. At present, most unmanned aerial vehicle platforms are equipped with an on-board camera and a GPS chip, and can also carry sensors such as an IMU, an on-board compass and a barometer, allowing the vehicle to perceive its environment and monitor its own state. Under normal conditions the unmanned aerial vehicle is positioned through dedicated hardware such as GPS; in special environments, when the GPS signal is unavailable or the hardware fails, a more robust positioning method is needed to ensure the current task can still be executed smoothly. Combined with the sensor platform the vehicle already carries, a vision-based aerial positioning method can effectively achieve positioning and improve the stability and reliability of the unmanned aerial vehicle system.
Image registration is a classical problem in computer vision: given two images, one image is mapped onto the other by finding a spatial transformation, so that points in the two images corresponding to the same spatial location are mapped one to one. In practice, a homography is usually used to describe the spatial transformation between two images: multiplying the homography matrix by the homogeneous coordinates of a point in one image gives the homogeneous coordinates of the corresponding point in the other. Image registration is commonly realised by a key-point feature matching algorithm. To estimate the spatial transformation from image A to image B: first, detect key points in both images, i.e. pixels that are stable under image transformations and salient in the image scene; second, extract neighbourhood information of the detected key points to construct fixed-length vectors as feature descriptions; third, for each key-point feature in image A, search for its k nearest neighbours among the key-point features of image B by feature-vector distance, remove key points of image A with poor saliency via a significance test, and for the remaining key points select the nearest of the k neighbours as the final matching point; finally, fit the matching relation between the remaining key points of image A and those of image B using the RANSAC algorithm or a direct linear transformation, estimating the homography matrix that transforms image A to image B.
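The k-nearest-neighbour search and significance (ratio) test described above can be sketched as follows. This is an illustrative NumPy sketch, not code from the patent; the `ratio` threshold of 0.75 is a conventional value assumed here.

```python
import numpy as np

def match_features(desc_a, desc_b, k=2, ratio=0.75):
    """Match key-point descriptors of image A against image B: k-NN search
    by feature distance, then a ratio-style significance test that discards
    key points of A whose best match is not clearly better than the second
    best. Returns a list of (index_in_A, index_in_B) pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distances to every feature of B
        nn = np.argsort(dists)[:k]                  # k nearest neighbours in B
        if k < 2 or dists[nn[0]] < ratio * dists[nn[1]]:
            matches.append((i, int(nn[0])))         # keep the nearest of the k neighbours
    return matches
```

For example, two descriptors of A that each lie close to a distinct descriptor of B are matched, while an ambiguous descriptor (two nearly equidistant neighbours) would be discarded by the ratio test.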
In an aerial unmanned aerial vehicle positioning method based on image registration, the vehicle registers the image it shoots of the ground against a satellite map image carrying geographic information, and solving a high-precision homography matrix realises positioning of the vehicle.
In recent years, many research teams at home and abroad have proposed vision-based aerial unmanned aerial vehicle positioning methods. The vision-based positioning problem was first treated as an image retrieval problem: a large-scale image database with geographic information is built, the image shot by the vehicle is processed to extract features, the most similar images are retrieved from the database by feature similarity, and accurate image matching is then performed. This approach requires constructing a high-quality, large-scale database, which in turn needs considerable storage space. Compared with image retrieval, directly registering the ground image shot by the vehicle against the regional satellite map is more efficient. However, when image registration algorithms such as SIFT and SURF are used to register the shot image directly against the regional satellite map, the two images differ greatly in scale and rotation angle, and the rich textures contained in the full regional satellite map introduce a large amount of noise, so the algorithm cannot find correct matching points in the satellite map for the key points of the shot image. This gave rise to coarse-to-fine positioning methods, which first roughly estimate the vehicle's position within the region and then perform image registration on the small area around the rough position.
Coarse-to-fine unmanned aerial vehicle visual positioning methods based on a visual odometer were adopted by Ahmed Nassar et al. in France in 2018 and by Hunter Goforth et al. in the United States in 2019. These methods require the initial position of the vehicle to be known and an initial-position image to be shot at take-off; the vehicle must continuously shoot ground images during flight with effective overlap between consecutive frames, a rough position is accumulated through frame-to-frame image registration, and fine image registration is then performed on the roughly located area. Ahmed Nassar et al. estimate the frame-to-frame transformation with the SIFT algorithm and realise accurate matching by matching the geometric shapes of semantically segmented objects, which requires effective support from an aerial-view semantic segmentation model. Hunter Goforth et al. estimate the frame-to-frame transformations with a pre-trained neural network and update the frame-to-frame and frame-to-map transformations by an optimisation method, improving estimation accuracy, but they do not escape the constraints of the visual odometer, which still greatly limits practical application.
Disclosure of Invention
The problems solved by the invention are as follows. Because the image shot by the unmanned aerial vehicle and the satellite map image differ greatly in scale and rotation angle, and the rich textures of the satellite map image introduce a large amount of noise, when the shot image is registered directly against the satellite map image of the whole area through traditional key-point features, the algorithm can hardly find correct matching points in the satellite map for the shot image's key points and cannot obtain an effective homography matrix. Coarse-to-fine visual positioning methods rely on continuous accumulation from a visual odometer to estimate the vehicle's position, realising positioning through two stages and multiple image registrations, which limits their application. The invention provides a method that realises visual positioning of the unmanned aerial vehicle through a single registration between the shot image and the satellite map image of the complete area, combined with the sensors carried by the vehicle's hardware platform.
The technical solution of the invention comprises the following steps:
(1) Preprocessing of the image shot by the unmanned aerial vehicle. The shot image and the satellite map image differ greatly in direction and scale; this difference cannot be handled effectively by feature invariance alone, and it must be reduced to enable effective registration from the shot image to the map image. Using the simple sensors carried by the platform, the flight height can be read from a sensor such as a barometer and the flight direction from a sensor such as the on-board compass. Since the spatial resolution and standard orientation of the satellite map image are known, the rotation angle and scale difference between the shot image and the map image can be estimated from the vehicle's flight height and direction, and the shot image can be aligned with the map image through rotation and scaling transformations. This effectively reduces the scale and rotation differences between the two images and improves the stability and accuracy of image registration;
(2) Key point detection on the image shot by the unmanned aerial vehicle. After preprocessing, the shot image and the satellite map image are approximately aligned in scale and direction, so the edges of objects, and of the parts within objects, have obvious correspondences across the two images. The SEEDS superpixel algorithm divides an image into a number of homogeneous superpixels whose boundaries coincide with the edges of objects or of their parts. Because the scales of the two images have been corrected and their spatial resolutions are essentially consistent, the superpixel edge points the SEEDS algorithm generates on the same object in the two images correspond accurately, and an accurate image transformation can be estimated by associating the key-point coordinates;
(3) Key point feature extraction from the image shot by the unmanned aerial vehicle. Step (2) detects a large number of key points in the shot image and in the satellite map image, and the transformation between the two images is estimated by associating the key points across them. The association of key points is measured through the similarity of key-point features; SIFT feature description is used to extract the appearance information of each key point's neighbourhood. SIFT features have a certain rotation and scale invariance, tolerate the preprocessing errors of step (1) to a degree, and provide stable and effective key point descriptions;
(4) Feature matching between the image shot by the unmanned aerial vehicle and the satellite map image. The SuperGlue algorithm performs feature matching on the key-point feature sets extracted from the two images: it aggregates key-point features through a graph neural network and generates for each key point a description containing its appearance information, its spatial position and information about neighbouring key points. A similarity matrix is then constructed over the aggregated key-point features, and the Sinkhorn algorithm iteratively optimises the association relation to obtain the optimal matching of key points between the two images;
(5) Estimation of the spatial transformation from the image shot by the unmanned aerial vehicle to the map image. From the point correspondences obtained by feature matching, the homography best fitting the key-point correspondences is solved with the RANSAC algorithm, which is robust to noise, filters out false matches and outputs an accurate homography. The coordinates of the centre of the shot image within the map image are computed from the homography matrix estimated by RANSAC, and, combined with the geographic information of the map, the longitude and latitude of that point are obtained as the vehicle's current longitude and latitude coordinates.
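As a sketch of the robust fit in step (5), the NumPy code below estimates a homography by direct linear transform (DLT) inside a RANSAC loop. This is illustrative, not the patent's code; the 1-pixel threshold and 2048 iterations match values given later in the embodiment, while the final refit over all inliers is a common refinement assumed here.

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: fit a 3x3 H such that dst ~ H @ src,
    from at least 4 point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)       # null vector of A, reshaped to 3x3
    return H / H[2, 2]

def ransac_homography(src, dst, n_iter=2048, thresh=1.0, rng=np.random.default_rng(0)):
    """Repeatedly fit H to 4 random correspondences, count inliers within
    `thresh` pixels, keep the largest inlier set, then refit on all inliers."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    ones = np.ones((len(src), 1))
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 4, replace=False)
        H = homography_dlt(src[idx], dst[idx])
        p = np.hstack([src, ones]) @ H.T
        with np.errstate(divide="ignore", invalid="ignore"):
            proj = p[:, :2] / p[:, 2:3]             # homogeneous -> pixel coordinates
            inliers = np.linalg.norm(proj - dst, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return homography_dlt(src[best_inliers], dst[best_inliers]), best_inliers
```

With a pure translation and a single gross outlier, the loop recovers the translation homography and flags the outlier as the only non-inlier.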
Compared with the prior art, the method simply preprocesses the image shot by the unmanned aerial vehicle and then registers it directly against the map image. Unlike coarse-to-fine visual positioning methods, it needs neither a given initial position nor a rough position estimate from a visual odometer or sensors such as an IMU; visual positioning is completed with a single image registration, effectively reducing the usage restrictions of image-registration-based aerial positioning methods.
Drawings
FIG. 1 is a flow chart of the unmanned aerial vehicle positioning method based on image registration;
FIG. 2 is a schematic diagram showing the relationship between the flying height of the unmanned aerial vehicle and the spatial resolution of the captured image;
FIG. 3 is a diagram illustrating the detection result of the image key points;
FIG. 4 is a flowchart of the SuperGlue feature matching algorithm.
Detailed Description
In order to make the objects, technical solutions and advantages of the present method clearer, a detailed description follows with reference to the accompanying drawings and embodiments.
As shown in fig. 1, the unmanned aerial vehicle positioning method based on image registration of the present invention includes the following steps:
(1) The method comprises the steps of preprocessing a shot image of the unmanned aerial vehicle, acquiring the flight height of the unmanned aerial vehicle from a height sensor carried by the unmanned aerial vehicle, acquiring the flight direction of the unmanned aerial vehicle from a course sensor carried by the unmanned aerial vehicle, obtaining the spatial resolution difference and the direction difference between the shot image of the unmanned aerial vehicle and a satellite map according to the marking information of the satellite map image, and performing rotation transformation and scale transformation on the shot image of the unmanned aerial vehicle to enable the shot image of the unmanned aerial vehicle and the map image to have the same direction and scale;
(2) Detecting key points of the unmanned aerial vehicle shot image, segmenting the preprocessed unmanned aerial vehicle shot image by using an SEEDS super-pixel segmentation algorithm to generate super-pixel division, and selecting boundary points of all super-pixels as key points;
(3) Extracting key point features of the unmanned aerial vehicle shot image, namely extracting SIFT features of the key points detected in the unmanned aerial vehicle shot image;
(4) Matching the features of the images shot by the unmanned aerial vehicle and the map images, performing feature matching on key point feature sets extracted from the two images by using a SuperGlue algorithm, and obtaining the corresponding relation of the coordinates of the key point images in the two images according to the corresponding relation of the features;
(5) Estimating the spatial transformation from the image shot by the unmanned aerial vehicle to the satellite map image: according to the correspondence of points in the two images obtained by feature matching, using the RANSAC algorithm to solve the homography matrix transforming the shot image to the satellite map image, calculating the coordinates of the centre of the shot image in the map image, and combining the geographic information of the map to obtain the longitude and latitude of that point as the current longitude and latitude coordinates of the unmanned aerial vehicle.
According to step (1), the spatial resolution and standard orientation of the satellite map image are known; the heading angle of the unmanned aerial vehicle in flight is determined by reading sensor information, and the shot image is rotated to align it with the direction of the map. The spatial resolution of the shot image is calculated from the flight height: as in fig. 2, for the same camera carried by the vehicle, images shot vertically downward at different flight heights obey a similar-triangle relation of the camera's field of view, namely:

h1 / h2 = a1 / a2 = b1 / b2

where h1, h2 are different flight heights of the unmanned aerial vehicle (unit: metres), a1, a2 are the spatial lengths (unit: metres) of the images shot at those heights, and b1, b2 are the corresponding spatial widths (unit: metres). The camera is tested in advance at a given height to determine the spatial resolution of the images shot there; during positioning in flight, the flight height is read, the spatial resolution of the shot image is calculated from the similarity relation, and the shot image is zoomed to reach a spatial resolution consistent with the satellite map image.
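The similar-triangle relation implies that ground resolution (metres per pixel) scales linearly with flight height, so a single calibration at one height suffices. A minimal sketch with illustrative parameter names:

```python
def ground_resolution(height_m, ref_height_m, ref_resolution_m_per_px):
    """Spatial resolution of the drone image at `height_m`, given a one-off
    calibration: at `ref_height_m` the camera was measured to produce
    `ref_resolution_m_per_px`. Follows h1/h2 = a1/a2 = b1/b2."""
    return ref_resolution_m_per_px * height_m / ref_height_m

def zoom_factor(height_m, ref_height_m, ref_resolution_m_per_px, map_resolution_m_per_px):
    """Scale factor to apply to the drone image so that its spatial
    resolution matches the satellite map's."""
    return ground_resolution(height_m, ref_height_m, ref_resolution_m_per_px) / map_resolution_m_per_px
```

For example, doubling the flight height doubles the metres covered per pixel, so a camera calibrated to 0.05 m/px at 100 m yields 0.1 m/px at 200 m.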
According to step (2), the edges of objects in the image, and of the parts within objects, serve as effective key points for image registration. As shown in fig. 3, the SEEDS superpixel segmentation algorithm divides the image into a number of homogeneous superpixels whose boundaries coincide with the edges of objects and of their parts. After step (1) the two images have consistent scale and direction, so the boundary points of the superpixels SEEDS produces in the two images correspond accurately. The SEEDS algorithm takes as a parameter the number of superpixels to generate; in the key point detection step of the invention this parameter is computed from the image size and set as follows:
Figure BDA0002771939130000052
where hi, wi are the height and width of the image (unit: pixels) and ⌊·⌋ is the rounding-down (floor) operation. For the satellite map image, key point detection is performed before the unmanned aerial vehicle takes off; the key point set detected the first time is used throughout the flight and is not detected again. At each positioning, key point detection is performed only on the preprocessed image shot by the unmanned aerial vehicle. The SEEDS parameters for both images are set by the same method, ensuring that the algorithm generates the same superpixel division on the same object.
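Given a superpixel label map produced by any SEEDS implementation (e.g. the one in OpenCV's ximgproc module, assumed available but not shown here), the boundary-point key points of step (2) can be extracted as below. This is an illustrative sketch, not the patent's code.

```python
import numpy as np

def superpixel_boundary_points(labels):
    """Return (row, col) coordinates of superpixel boundary pixels, i.e.
    pixels whose label differs from the right or lower neighbour in the
    (H, W) integer label map `labels`. These boundary points serve as
    the key points for registration."""
    boundary = np.zeros(labels.shape, dtype=bool)
    boundary[:, :-1] |= labels[:, :-1] != labels[:, 1:]   # differs from right neighbour
    boundary[:-1, :] |= labels[:-1, :] != labels[1:, :]   # differs from lower neighbour
    return np.argwhere(boundary)
```

On a tiny 3x3 label map with three superpixels, exactly the pixels touching a label change are returned.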
According to step (3), SIFT features are extracted at the detected key points: the SIFT descriptor computes image gradient directions in each key point's neighbourhood and constructs a feature description containing the neighbourhood information, giving a certain rotation and scale invariance and some tolerance to preprocessing errors. Before the unmanned aerial vehicle takes off, after key point detection, the SIFT features of the satellite map image at its key points are extracted and stored in advance; the feature set extracted the first time is used throughout the flight and is not recomputed. During positioning in flight, key point detection is performed only on the ground image shot by the vehicle, and the SIFT feature set of that image at its key points is extracted.
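For illustration only, here is a heavily simplified stand-in for the SIFT description: a single gradient-orientation histogram over the key point's neighbourhood. Real SIFT, as used in the method, builds a 4x4 grid of such histograms with Gaussian weighting and dominant-orientation alignment; this sketch only conveys the idea of describing a neighbourhood by its gradient directions.

```python
import numpy as np

def orientation_histogram_descriptor(img, kp, radius=8, bins=8):
    """One L2-normalised gradient-orientation histogram over the square
    neighbourhood of key point kp = (row, col) in a grayscale image.
    A toy analogue of a SIFT descriptor, for illustration."""
    r, c = kp
    patch = img[r - radius:r + radius + 1, c - radius:c + radius + 1].astype(float)
    gy, gx = np.gradient(patch)                 # gradients along rows and columns
    mag = np.hypot(gx, gy)                      # gradient magnitude (histogram weight)
    ang = np.arctan2(gy, gx) % (2 * np.pi)      # gradient direction in [0, 2*pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist
```

On a horizontal intensity ramp, all gradient energy falls in the first orientation bin.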
According to step (4), the SIFT feature sets extracted from the two images are matched. As shown in fig. 4, edges are established between the key points within each image and between the key points of the two images, forming a graph whose node features consist of the key points' image coordinates and SIFT feature descriptions. The SuperGlue algorithm aggregates each node's features with a graph neural network, performing feature transformation by integrating the key point's neighbourhood appearance, spatial position and neighbouring key-point information, and updating the node features to obtain more salient key-point features. Let the number of key points in the shot image be M and in the satellite map image be N. The pairwise similarities of key-point features across the two images are computed to build an M×N similarity matrix, which is iteratively optimised with the Sinkhorn algorithm (maximum number of iterations T = 100) to obtain an M×N association matrix. Each row of the association matrix represents the association between a key point of the shot image and every key point of the satellite map image; for each key point of the shot image, the satellite map key point with the highest association value is selected as its matching point.
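The Sinkhorn step can be sketched as alternating row and column normalisation of the exponentiated similarity matrix. This is an illustrative sketch; SuperGlue's learnable "dustbin" row and column for unmatched key points is omitted here.

```python
import numpy as np

def sinkhorn(sim, n_iter=100):
    """Iteratively normalise rows and columns of exp(sim) so the result
    approaches a doubly stochastic association matrix, as in step (4)
    (T = 100 iterations, matching the value in the text)."""
    P = np.exp(sim)
    for _ in range(n_iter):
        P /= P.sum(axis=1, keepdims=True)   # row normalisation
        P /= P.sum(axis=0, keepdims=True)   # column normalisation
    return P

def best_matches(P):
    # for each key point of the shot image (row), pick the map key point
    # with the highest association value
    return P.argmax(axis=1)
```

With a strongly diagonal similarity matrix, each row's best match is the corresponding column, and row sums converge towards 1.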
In step (5), the correspondences of key-point image coordinates obtained in step (4) are fitted with the RANSAC algorithm, with the error threshold set to 1 pixel and the maximum number of iterations set to 2048, solving the homography matrix that best fits the key point matching result; this homography is taken as the spatial transformation from the shot image to the satellite map image. The longitude and latitude of the shot image's centre point are taken as those of the vehicle's current position: the point's coordinates in the satellite map image are solved from the homography matrix, and its longitude and latitude are calculated from the longitude and latitude annotations of the satellite map.
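Once the homography is estimated, the final positioning reduces to mapping the image centre through it and applying the map's georeference. A sketch under the simplifying assumption of an axis-aligned linear geotransform (lon0, lon_per_px, lat0, lat_per_px); both the transform shape and the names are illustrative, not the patent's notation.

```python
import numpy as np

def locate_drone(H, img_w, img_h, map_geotransform):
    """Map the centre of the drone image through the 3x3 homography H
    (drone image -> map image), then convert the resulting map pixel to
    longitude/latitude with a simple linear geotransform. Real satellite
    maps may use richer (e.g. GDAL-style six-element) transforms."""
    cx, cy = (img_w - 1) / 2.0, (img_h - 1) / 2.0
    p = H @ np.array([cx, cy, 1.0])
    u, v = p[0] / p[2], p[1] / p[2]           # homogeneous -> map pixel coordinates
    lon0, lon_per_px, lat0, lat_per_px = map_geotransform
    return lon0 + u * lon_per_px, lat0 + v * lat_per_px
```

For example, a pure 2x scaling homography sends the centre of a 101x101 image to map pixel (100, 100), which the geotransform then converts to coordinates.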
Through the above steps, and combined with the simple sensors carried by the unmanned aerial vehicle, the influence of the large amount of noise in the satellite map image on the registration result is reduced. Compared with coarse-to-fine visual positioning methods, no hardware such as a visual odometer or IMU is needed to roughly estimate the vehicle's position; its position within the area is determined with a single image registration, reducing the limitations of unmanned aerial vehicle visual positioning in application. Parts of the invention not described in detail are well known in the art.
Although illustrative embodiments of the present invention have been described above to facilitate the understanding of the present invention by those skilled in the art, it should be understood that the present invention is not limited to the scope of the embodiments, but various changes may be apparent to those skilled in the art, and it is intended that all inventive concepts utilizing the inventive concepts set forth herein be protected without departing from the spirit and scope of the present invention as defined and limited by the appended claims.

Claims (4)

1. An unmanned aerial vehicle positioning method based on image registration is characterized by comprising the following steps:
(1) Preprocessing the image captured by the unmanned aerial vehicle: obtaining the flying height of the unmanned aerial vehicle from an altitude sensor it carries and its flying direction from a heading sensor it carries; obtaining the spatial-resolution difference and direction difference between the captured image and the satellite map image from the annotation information of the satellite map image; and applying rotation and scale transformations to the captured image so that it has the same direction and scale as the satellite map image;
(2) Detecting key points in the image captured by the unmanned aerial vehicle: segmenting the preprocessed image with the SEEDS superpixel segmentation algorithm to generate a superpixel partition, and selecting the boundary points of all superpixels as key points;
(3) Extracting key-point features from the image captured by the unmanned aerial vehicle, namely extracting SIFT features at the key points detected in the image;
(4) Matching the features of the image captured by the unmanned aerial vehicle and the satellite map image: performing feature matching on the key-point feature sets extracted from the two images with the SuperGlue algorithm, and obtaining the correspondence between key-point image coordinates in the two images from the correspondence between features;
(5) Estimating the spatial transformation from the image captured by the unmanned aerial vehicle to the satellite map image: from the point correspondences obtained by feature matching, solving with the RANSAC algorithm for the homography matrix that transforms the captured image to the satellite map image, computing the coordinates of the captured image's centre in the satellite map image, and, combining the geographic information of the satellite map image, obtaining the longitude and latitude of that point as the current longitude and latitude coordinates of the unmanned aerial vehicle;
the step (1) specifically comprises:
the spatial resolution and the standard direction of the satellite map image are known; the heading angle of the unmanned aerial vehicle in flight is determined by reading the sensor information, and the image captured by the unmanned aerial vehicle is rotated into alignment with the direction of the satellite map image; the spatial resolution of the image captured by the unmanned aerial vehicle is calculated from its flying height: for the same camera carried by the unmanned aerial vehicle, when ground images are captured vertically downward at different flying heights, the camera's field of view obeys a similarity relation, namely:
h_1 / h_2 = a_1 / a_2 = b_1 / b_2
where h_1, h_2 are different flying heights of the unmanned aerial vehicle, a_1, a_2 are the spatial lengths of the images captured at those flying heights, and b_1, b_2 are the spatial widths of the images captured at those flying heights; the camera is tested at a given height in advance to determine the spatial resolution of the image it captures at that height; during flight, the flying height of the unmanned aerial vehicle is read at positioning time, the spatial resolution of its captured image is calculated from the similarity relation, and the captured image is scaled to reach a spatial resolution consistent with the satellite map image;
the step (2) of detecting the key points specifically comprises the following steps:
the edges of objects in an image, and the edges of the parts within an object, serve as effective key points for image registration; the SEEDS superpixel segmentation algorithm is used to divide the image into a number of homogeneous superpixels whose boundaries coincide with the edges of objects and of their parts; because the two images have consistent scale and direction after step (1), the superpixel boundary points produced by the SEEDS algorithm in the two images have an accurate correspondence; the SEEDS algorithm accepts a parameter setting the number of superpixels to generate, and in the key-point detection step this parameter is calculated from the image size, its value being set as
Figure FDA0003930382200000021
where h_i, w_i are the height and width of the image and
Figure FDA0003930382200000022
denotes the rounding-down (floor) operation;
the step (3) specifically comprises:
SIFT features are extracted at the key points detected in the image. For the satellite map image, key points are detected and their SIFT features extracted and stored in advance, before the unmanned aerial vehicle takes off; this first extracted SIFT feature set is reused throughout the flight without recomputation. At each positioning during flight, key points are detected in the ground image captured by the unmanned aerial vehicle and the SIFT feature set of that image at its key points is extracted.
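The height-based scale calculation of step (1) can be sketched as follows. This is a minimal illustration of the similarity relation, not the patented implementation; the reference height and resolutions in the usage note are hypothetical values.

```python
def uav_ground_resolution(flight_height_m, ref_height_m, ref_resolution_m_per_px):
    """By the similarity relation h1/h2 = a1/a2 = b1/b2, the ground
    distance covered by one pixel grows linearly with flying height."""
    return ref_resolution_m_per_px * (flight_height_m / ref_height_m)


def scale_factor_to_map(flight_height_m, ref_height_m, ref_resolution_m_per_px,
                        map_resolution_m_per_px):
    """Factor by which the UAV image should be resized so that one pixel
    covers the same ground distance as one satellite-map pixel."""
    uav_res = uav_ground_resolution(flight_height_m, ref_height_m,
                                    ref_resolution_m_per_px)
    return uav_res / map_resolution_m_per_px
```

For example, a camera calibrated at 100 m with 0.05 m/px yields 0.10 m/px at 200 m; against a 0.20 m/px satellite map the captured image would be resized by a factor of 0.5 before registration.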
2. The unmanned aerial vehicle positioning method based on image registration as claimed in claim 1, wherein for the satellite map image, image key-point detection is performed before the unmanned aerial vehicle takes off; the key-point set detected this first time is used throughout the flight without repeated detection, and at each positioning only the preprocessed image captured by the unmanned aerial vehicle undergoes key-point detection; the parameters of the SEEDS algorithm are set by the same method in both key-point detections, ensuring that the algorithm generates the same superpixel partition on the same object.
3. The method for unmanned aerial vehicle positioning based on image registration as claimed in claim 1, wherein the step (4) comprises:
matching the SIFT feature sets extracted from the two images: edges are established between key points within each image and between key points across the two images to form a graph whose node features consist of the key points' image coordinates and SIFT feature descriptors; the SuperGlue algorithm uses a graph neural network to aggregate the features of each node in the graph, performing feature transformations that integrate each key point's neighbourhood appearance, spatial position, and the information of adjacent key points, and updating the node features to obtain more salient key-point features; denoting the number of key points in the image captured by the unmanned aerial vehicle as M and the number in the satellite map image as N, the pairwise similarity of key-point features across the two images is computed to build an M×N similarity matrix; this matrix is iteratively optimised with the Sinkhorn algorithm, with a maximum of T iterations, to obtain an M×N correlation matrix, each row of which expresses how strongly the features of the corresponding point in the captured image correlate with the features in the satellite map image; the key point with the highest correlation in the satellite map image is selected according to the values of the correlation matrix.
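The Sinkhorn normalisation at the heart of the matching step can be sketched in NumPy. This is a simplified sketch (no dustbin row/column, plain alternating normalisation for T iterations), not the SuperGlue implementation itself:

```python
import numpy as np


def sinkhorn(similarity, T=100):
    """Iteratively normalise exp(similarity) so that rows and columns
    each sum to one, yielding a soft assignment (correlation) matrix."""
    K = np.exp(similarity)
    for _ in range(T):
        K = K / K.sum(axis=1, keepdims=True)   # row normalisation
        K = K / K.sum(axis=0, keepdims=True)   # column normalisation
    return K


def best_matches(correlation):
    """For each key point of the UAV image (one row), pick the satellite
    key point (column) with the highest correlation value."""
    return correlation.argmax(axis=1)
```

On a square similarity matrix this alternation converges to a doubly stochastic matrix, from which `best_matches` reads off one candidate correspondence per UAV key point.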
4. The method for unmanned aerial vehicle positioning based on image registration as claimed in claim 1, wherein the step (5) comprises:
fitting the key-point image-coordinate correspondences obtained in step (4) with the RANSAC algorithm: the error threshold and the maximum number of iterations of the RANSAC algorithm are set, and the homography matrix that best fits the key-point matching result is solved for and taken as the spatial transformation from the image captured by the unmanned aerial vehicle to the satellite map image; the longitude and latitude of the centre point of the captured image are taken as those of the unmanned aerial vehicle's current position, the coordinates of that point in the satellite map image are solved from the homography matrix, and its longitude and latitude are calculated from the longitude and latitude range of the satellite map.
CN202011252158.XA 2020-11-11 2020-11-11 Unmanned aerial vehicle positioning method based on image registration Active CN112419374B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011252158.XA CN112419374B (en) 2020-11-11 2020-11-11 Unmanned aerial vehicle positioning method based on image registration


Publications (2)

Publication Number Publication Date
CN112419374A CN112419374A (en) 2021-02-26
CN112419374B true CN112419374B (en) 2022-12-27

Family

ID=74781858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011252158.XA Active CN112419374B (en) 2020-11-11 2020-11-11 Unmanned aerial vehicle positioning method based on image registration

Country Status (1)

Country Link
CN (1) CN112419374B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113432594A (en) * 2021-07-05 2021-09-24 北京鑫海宜科技有限公司 Unmanned aerial vehicle automatic navigation system based on map and environment
CN113838126B (en) * 2021-09-27 2022-05-10 广州赋安数字科技有限公司 Video monitoring and unmanned aerial vehicle image alignment method
CN114201633B (en) * 2022-02-17 2022-05-17 四川腾盾科技有限公司 Self-adaptive satellite image generation method for unmanned aerial vehicle visual positioning
CN114612788B (en) * 2022-03-22 2023-04-07 东北林业大学 Urban landscape plant diversity monitoring method based on neural network
CN115495611B (en) * 2022-11-18 2023-03-24 中国电子科技集团公司第五十四研究所 Space scene retrieval method oriented to autonomous positioning of unmanned aerial vehicle
CN115861591B (en) * 2022-12-09 2024-02-02 南京航空航天大学 Unmanned aerial vehicle positioning method based on transformer key texture coding matching
CN117274391B (en) * 2023-11-23 2024-02-06 长春通视光电技术股份有限公司 Digital map matching target positioning method based on graphic neural network

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106485655A (en) * 2015-09-01 2017-03-08 张长隆 A kind of taken photo by plane map generation system and method based on quadrotor
CN109460046A (en) * 2018-10-17 2019-03-12 吉林大学 A kind of unmanned plane identify naturally not with independent landing method
CN110111372A (en) * 2019-04-16 2019-08-09 昆明理工大学 Medical figure registration and fusion method based on SIFT+RANSAC algorithm
CN110569861A (en) * 2019-09-01 2019-12-13 中国电子科技集团公司第二十研究所 Image matching positioning method based on point feature and contour feature fusion
CN111028292A (en) * 2019-12-13 2020-04-17 中国电子科技集团公司第二十研究所 Sub-pixel level image matching navigation positioning method
US20200226352A1 (en) * 2012-06-13 2020-07-16 San Diego State University Research Foundation Wide area intermittent video using non-orthorectified feature matching in long period aerial image capture with pixel-based georeferencing
CN111666882A (en) * 2020-06-08 2020-09-15 武汉唯理科技有限公司 Method for extracting answers of handwritten test questions
CN111780764A (en) * 2020-06-30 2020-10-16 杭州海康机器人技术有限公司 Visual positioning method and device based on visual map


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A novel approach for coarse-to-fine windthrown tree extraction based on unmanned aerial vehicle images; Duan FuZhou et al.; Remote Sensing; 2017-05-24; full text *
UAV aerial image stitching based on SIFT features; Qiao Wenzhi; Trainer (教练机); 2011-12-31; pp. 9-12 *
Research on UAV visual navigation based on the SIFT algorithm; Liu Xue et al.; Radio Engineering (无线电工程); 2017-05-05 (No. 05); pp. 19-22 *
Research on key technologies for precise positioning of ground targets by UAV based on videometrics; Long Gucan; China Doctoral Dissertations Full-text Database, Engineering Science & Technology II; 2019-01-15; full text *

Also Published As

Publication number Publication date
CN112419374A (en) 2021-02-26

Similar Documents

Publication Publication Date Title
CN112419374B (en) Unmanned aerial vehicle positioning method based on image registration
Nassar et al. A deep CNN-based framework for enhanced aerial imagery registration with applications to UAV geolocalization
US8437501B1 (en) Using image and laser constraints to obtain consistent and improved pose estimates in vehicle pose databases
CN105352509B (en) Unmanned plane motion target tracking and localization method under geography information space-time restriction
CN111213155A (en) Image processing method, device, movable platform, unmanned aerial vehicle and storage medium
CN109099929B (en) Intelligent vehicle positioning device and method based on scene fingerprints
Dabeer et al. An end-to-end system for crowdsourced 3D maps for autonomous vehicles: The mapping component
US11748449B2 (en) Data processing method, data processing apparatus, electronic device and storage medium
Dawood et al. Harris, SIFT and SURF features comparison for vehicle localization based on virtual 3D model and camera
CN107560603A (en) A kind of unmanned plane oblique photograph measuring system and measuring method
CN113340312A (en) AR indoor live-action navigation method and system
Müller et al. Squeezeposenet: Image based pose regression with small convolutional neural networks for real time uas navigation
CN113838129B (en) Method, device and system for obtaining pose information
Hou et al. UAV pose estimation in GNSS-denied environment assisted by satellite imagery deep learning features
Majdik et al. Micro air vehicle localization and position tracking from textured 3d cadastral models
Zahedian et al. Localization of autonomous vehicles: proof of concept for a computer vision approach
KR102249381B1 (en) System for generating spatial information of mobile device using 3D image information and method therefor
Tsao et al. Stitching aerial images for vehicle positioning and tracking
CN117073669A (en) Aircraft positioning method
Zhang et al. An UAV navigation aided with computer vision
Shukla et al. Automatic geolocation of targets tracked by aerial imaging platforms using satellite imagery
Han et al. Uav vision: Feature based accurate ground target localization through propagated initializations and interframe homographies
Chen et al. An oblique-robust absolute visual localization method for GPS-denied UAV with satellite imagery
KR20220062709A (en) System for detecting disaster situation by clustering of spatial information based an image of a mobile device and method therefor
CN115597592B (en) Comprehensive positioning method applied to unmanned aerial vehicle inspection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant