CN117274375A - Target positioning method and system based on transfer learning network model and image matching - Google Patents

Target positioning method and system based on transfer learning network model and image matching

Info

Publication number
CN117274375A
Authority
CN
China
Prior art keywords
image
feature
map
target
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311056871.0A
Other languages
Chinese (zh)
Inventor
吕梅柏
白坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwestern Polytechnical University filed Critical Northwestern Polytechnical University
Priority to CN202311056871.0A priority Critical patent/CN117274375A/en
Publication of CN117274375A publication Critical patent/CN117274375A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/096 Transfer learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/757 Matching configurations of points or features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Remote Sensing (AREA)
  • Astronomy & Astrophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)

Abstract

The application provides a target positioning method and system based on a transfer learning network model and image matching. The method acquires an aerial image to be positioned, captured by the target to be positioned, and extracts its image feature vector. A satellite map image set is acquired, comprising the map feature vectors of a plurality of map area images. According to the image feature vector and the map feature vectors, a target map area image matching the aerial image to be positioned is extracted from the satellite map image set. The aerial image to be positioned and the target map area image are input into a feature extraction description model to obtain matched feature point pairs, each comprising the feature points matched between the aerial image to be positioned and the target map area image. Finally, the longitude and latitude coordinates of the target to be positioned are located according to the matched feature point pairs. By matching the aerial image information acquired by the target to be positioned against an onboard satellite map, the method achieves real-time positioning without external signal support while ensuring positioning accuracy.

Description

Target positioning method and system based on transfer learning network model and image matching
Technical Field
The application relates to the technical field of target positioning, and in particular to a target positioning method and system based on a transfer learning network model and image matching.
Background
Target positioning determines the actual position of a specified target. For example, in autonomous aircraft positioning, the position of the aircraft is determined by target positioning, which is applied in fields such as agricultural plant protection, aerial photography, security patrol, and earthquake rescue. As an important condition for ensuring that an aircraft safely and quickly reaches a target position, the accuracy of its autonomous positioning system directly determines the reliability of route planning, and thus affects the success of the flight mission and the safety of the aircraft.
In a low-altitude, short-duration flight mission, an autonomous aircraft positioning system can be divided into two parts according to the information source: in one part, the aircraft performs simultaneous localization and mapping (SLAM) of the environment to obtain accurate position information; in the other, accurate position information is provided by external devices, for example an optical motion capture system supplying the position and attitude of the aircraft. Such a system can accurately position the aircraft relative to the ground-station coordinate system over a short period, but owing to communication constraints and accumulated error it struggles with long-duration, wide-area positioning tasks relative to the ground coordinate system.
In wide-area, long-duration flight missions, target positioning relies mainly on satellite navigation, combined with methods such as inertial navigation and radar positioning, to sense the aircraft's position accurately in real time. Under extreme conditions such as adverse weather or loss of the satellite link, however, the aircraft cannot receive satellite signals, the accuracy of the positioning system degrades, and the autonomous navigation requirements of the aircraft become difficult to meet.
Disclosure of Invention
The present application provides a target positioning method and system based on a transfer learning network model and image matching, to address the problem of low target positioning accuracy in the absence of external information support (such as satellite signals).
In a first aspect, the present application provides a target positioning method based on a transfer learning network model and image matching, including:
acquiring an aerial image to be positioned, wherein the aerial image to be positioned is an image shot by a target to be positioned;
extracting an image feature vector of the aerial image to be positioned;
acquiring a satellite map image set, wherein the satellite map image set comprises map feature vectors of a plurality of map area images;
extracting a target map area image matched with the to-be-positioned aerial image from the satellite map image set according to the image feature vector and the map feature vector;
inputting the aerial image to be positioned and the target map area image into a feature extraction description model to obtain a matched feature point pair output by the feature extraction description model, wherein the matched feature point pair comprises feature points matched in the aerial image to be positioned and the target map area image, the feature extraction description model comprises a feature point position extraction model and a feature description model, the feature point position extraction model is used for extracting feature points and confidence of the image, and the feature description model is used for generating descriptors of the feature points;
and positioning longitude and latitude coordinates of the target to be positioned according to the matched characteristic point pairs.
Optionally, the step of acquiring the satellite map image set includes:
acquiring a satellite map image;
dividing the satellite map image into a plurality of map area images;
and extracting one-dimensional feature vectors of the map area image to obtain a satellite map image set.
Optionally, the step of extracting, from the satellite map image set, a target map area image matched with the to-be-positioned aerial image according to the image feature vector and the map feature vector includes:
traversing a plurality of map feature vectors in the satellite map image set;
calculating the matching similarity of the image feature vector and the map feature vector, wherein the matching similarity is calculated according to cosine similarity, divergence and Euclidean distance;
and extracting a target map area image in the satellite map image set, wherein the target map area image is the map area image with the matching similarity larger than a similarity threshold value.
Optionally, the method further comprises:
calculating cosine similarity, divergence and Euclidean distance of the image feature vector and the map feature vector;
acquiring weights of the cosine similarity, the divergence and the Euclidean distance;
and weighting and summing the cosine similarity, the divergence and the Euclidean distance according to the weight to obtain the matching similarity.
Optionally, the method further comprises:
acquiring a synthetic dataset comprising geometric images of a plurality of marker feature points;
constructing a full convolution network, and training the full convolution network based on the synthetic data set to obtain a basic feature extraction model;
and obtaining a non-calibrated scene image, and performing scene feature point extraction training on the basic feature extraction model based on the non-calibrated scene image to obtain a feature point extraction model.
Optionally, the method further comprises:
performing image transformation on the non-calibration scene image to obtain a non-calibration scene transformation image;
acquiring the corresponding relation between the characteristic points of the non-calibrated scene image and the non-calibrated scene change image;
and taking the corresponding relation of the feature points as a label, and carrying out feature point description matching training on the basic feature extraction model to obtain a feature description model.
Optionally, the step of locating the longitude and latitude coordinates of the target to be located according to the matching feature point pairs includes:
resolving the matched characteristic point pairs to obtain coordinate transformation matrixes of the to-be-positioned aerial image and the target map area image;
acquiring a longitude and latitude coordinate system of the target map area image and a pixel coordinate system of the to-be-positioned aerial image;
and converting the matched characteristic points in the pixel coordinate system into the longitude and latitude coordinate system according to the coordinate transformation matrix so as to position the longitude and latitude coordinates of the target to be positioned.
Optionally, the matched feature point pairs satisfy the following formula:

$$\begin{bmatrix} x_m^2 \\ y_m^2 \\ 1 \end{bmatrix} = H \begin{bmatrix} x_m^1 \\ y_m^1 \\ 1 \end{bmatrix}$$

where H is the coordinate transformation matrix, $(x_m^n, y_m^n)$ are the pixel coordinates of the matched feature points, the image number n is 1 for the aerial image to be positioned and 2 for the target map area image, and m is the index of the matched feature point pair.
Optionally, the method further comprises:
acquiring a historical map area image, wherein the historical map area image is a target map area image matched with the previous key frame aerial image of the aerial image to be positioned;
acquiring the regional position coordinates of the aerial image to be positioned;
if the regional position coordinates are located in the historical map regional image, inputting the to-be-positioned aerial image and the historical map regional image into a feature extraction description model;
and if the regional position coordinates are not positioned in the historical map regional image, executing the step of extracting the image feature vector of the aerial image to be positioned.
In a second aspect, the present application provides a target positioning system based on a transfer learning network model and image matching, including:
the image acquisition module is used for acquiring an aerial image to be positioned, wherein the aerial image to be positioned is an image shot by a target to be positioned;
the feature vector extraction module is used for extracting the image feature vector of the to-be-positioned aerial image;
the image matching module is used for acquiring a satellite map image set, wherein the satellite map image set comprises map feature vectors of a plurality of map area images, and extracting a target map area image matched with the to-be-positioned aerial image from the satellite map image set according to the image feature vectors and the map feature vectors;
the feature point matching module is used for inputting the aerial image to be positioned and the target map area image into a feature extraction description model so as to obtain the matched feature point pairs output by the feature extraction description model, wherein each matched feature point pair comprises the feature points matched in the aerial image to be positioned and the target map area image, the feature extraction description model comprises a feature point position extraction model and a feature description model, the feature point position extraction model is used for extracting the feature points and confidences of an image, and the feature description model is used for generating the descriptors of the feature points;
and the positioning module is used for positioning the longitude and latitude coordinates of the target to be positioned according to the matched characteristic point pairs.
According to the technical scheme, the present application provides a target positioning method and system based on a transfer learning network model and image matching. The method acquires an aerial image to be positioned, captured by the target to be positioned, and extracts its image feature vector. A satellite map image set is acquired, comprising the map feature vectors of a plurality of map area images. According to the image feature vector and the map feature vectors, a target map area image matching the aerial image to be positioned is extracted from the satellite map image set. The aerial image to be positioned and the target map area image are input into a feature extraction description model to obtain matched feature point pairs, each comprising the feature points matched between the two images. Finally, the longitude and latitude coordinates of the target to be positioned are located according to the matched feature point pairs. By matching the aerial image information acquired by the target to be positioned against an onboard satellite map, the method achieves real-time positioning without external signal support while ensuring positioning accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a target positioning method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of the transfer learning network model provided in an embodiment of the present application;
fig. 3 is a schematic flow chart of matching a map area image of a target according to an embodiment of the present application;
fig. 4 is a schematic diagram of a network structure of a feature extraction description network model according to an embodiment of the present application;
FIG. 5 is a schematic diagram of a training process of a feature extraction description model according to an embodiment of the present application;
fig. 6 is a schematic diagram of an image mapping relationship provided in an embodiment of the present application;
fig. 7 is a schematic diagram of a positioning result provided in an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described below do not represent all implementations consistent with the present application; they are merely examples of systems and methods consistent with some aspects of the present application as detailed in the claims.
Target positioning determines the actual position of a specified target. The target to be positioned is the object whose position is sought, such as an aircraft or an unmanned aerial vehicle. Through target positioning, the longitude and latitude of such equipment can be determined, which in turn supports route planning in fields such as agricultural plant protection, aerial photography, security patrol, and earthquake rescue.
In some embodiments, a target positioning system can be divided into two parts by information source: in one part, the target to be positioned performs simultaneous localization and mapping (SLAM) of the environment to obtain accurate position information; in the other, accurate position information is provided by external devices, such as an optical motion capture system supplying the position and attitude of the target. Such a system can accurately position the target relative to the ground-station coordinate system over a short period, but owing to communication constraints and accumulated error it struggles with long-duration, wide-area positioning tasks relative to the ground coordinate system.
In some embodiments, the target positioning method relies mainly on satellite navigation, combined with methods such as inertial navigation and radar positioning, to sense the position of the target to be positioned accurately in real time. Under extreme conditions such as adverse weather or loss of the satellite link, however, the target cannot receive satellite signals, the accuracy of the positioning system degrades, and the autonomous navigation requirements of the target become difficult to meet.
To solve the problem of low target positioning accuracy, an embodiment of the present application provides a target positioning method based on a transfer learning network model and image matching. A neural-network-based matching technique performs the matching-region retrieval task over a wide-area satellite map, which effectively addresses the problems of excessive satellite map volume, excessive detail, low direct-matching efficiency, and scarce computing resources. Using the real-time camera image as a template and exploiting the strong adaptability of neural networks, matching regions are detected in a complex satellite map, improving positioning accuracy. The target positioning method may be applied to a target positioning system configured to perform it. As shown in fig. 1, the target positioning method comprises:
s100: and acquiring an aerial image to be positioned.
The aerial image to be positioned is an image captured by the target to be positioned. Real-time positioning is achieved by matching the aerial image information acquired by the target against a satellite map. It should be noted that the target positioning method provided by the present application can be applied in many fields; for example, in the field of autonomous aircraft positioning, a downward-looking aerial image of the aircraft can be acquired and matched against the satellite map to position the aircraft in real time.
S200: and extracting an image feature vector of the aerial image to be positioned.
In this embodiment, the image feature vector of the aerial image to be positioned may be extracted by the transfer learning network model. Fig. 2 shows a schematic structural diagram of the transfer learning network model provided in an embodiment of the present application. The backbone of the model is a residual network (ResNet), specifically a ResNet-152 model, where 152 denotes the number of network layers; through transfer learning, the aerial image to be positioned is processed into a 2048-dimensional feature vector that serves as the image feature vector. In fig. 2, "7×7 conv, 64" denotes a 7×7 convolution kernel with 64 output channels, and "fc" denotes a fully connected layer.
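As an illustrative sketch (the patent does not specify a framework), this extraction step could be implemented in Python with PyTorch/torchvision, using an ImageNet-pretrained ResNet-152 whose classification head is discarded:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load a ResNet-152 pretrained on ImageNet and drop its classification head,
# keeping the global-average-pooled 2048-dim output as the image feature
# vector (transfer learning: the backbone is reused without retraining).
backbone = models.resnet152(weights=models.ResNet152_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # replace the fc layer with a pass-through
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_feature(img: Image.Image) -> torch.Tensor:
    """Return the 2048-dim feature vector for one RGB image."""
    x = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        return backbone(x).squeeze(0)  # shape: (2048,)

# Example use ("aerial.png" is a placeholder path):
# vec = extract_feature(Image.open("aerial.png").convert("RGB"))
```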
S300: a satellite map image set is acquired.
Wherein the satellite map image set includes map feature vectors of a plurality of map area images.
When the satellite map image set is acquired, a satellite map image can be obtained and divided into a plurality of map area images, and a one-dimensional feature vector is extracted from each map area image to obtain the satellite map image set. That is, the satellite map image is segmented, neural-network feature extraction is applied to each resulting map area image to obtain its feature vector, and the vectors are stored as the satellite map image set, for example as a feature file in "txt" format. The target positioning system can then carry this feature file directly and perform real-time positioning of the target to be positioned.
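A minimal sketch of the tiling and feature-file construction, reusing the extract_feature helper from the previous sketch (the tile size and the file layout are assumptions):

```python
import numpy as np
from PIL import Image

Image.MAX_IMAGE_PIXELS = None  # wide-area satellite maps can be very large

def build_feature_file(map_path: str, tile_size: int, out_path: str) -> None:
    """Split the satellite map into tiles and write one feature row per tile.

    Each row holds the tile's pixel origin followed by its 2048-dim feature
    vector, so the positioning system can load this small ".txt" file
    instead of the full map image.
    """
    satellite = Image.open(map_path).convert("RGB")
    w, h = satellite.size
    rows = []
    for y in range(0, h - tile_size + 1, tile_size):
        for x in range(0, w - tile_size + 1, tile_size):
            tile = satellite.crop((x, y, x + tile_size, y + tile_size))
            vec = extract_feature(tile)  # from the ResNet-152 sketch above
            rows.append([float(x), float(y)] + vec.tolist())
    np.savetxt(out_path, np.asarray(rows, dtype=np.float32))
```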
S400: and extracting a target map area image matched with the aerial image to be positioned from the satellite map image set according to the image feature vector and the map feature vector.
After the image feature vector and the map feature vectors are obtained, the map feature vectors in the satellite map image set can be traversed, the image feature vector is matched against each map feature vector, and their matching similarity is calculated in order to screen out the target map area image that matches the aerial image to be positioned. The target map area image is the map area image whose matching similarity is greater than the similarity threshold.

In some embodiments, the matching similarity is calculated from cosine similarity, divergence, and Euclidean distance: the three quantities are computed between the image feature vector and the map feature vector, their weights are obtained, and they are weighted and summed according to those weights to obtain the matching similarity.
For example, the matching similarity may be calculated as follows:
$$L = k_1 L_{cosin} + k_2 L_{lk} + k_3 L_{dis}$$

where L is the matching similarity, obtained as the weighted sum of the cosine similarity $L_{cosin}$, the divergence $L_{lk}$, and the Euclidean distance $L_{dis}$ between the map feature vector and the image feature vector, with corresponding weights $k_1$, $k_2$, $k_3$.
It can be understood that the established feature-vector matching similarity criterion consists of cosine similarity, divergence, and Euclidean distance. Compared with evaluation systems for image quality or image fusion, this criterion accounts for the influence of environment, scale, rotation, and image-quality differences between the satellite map image and the real-time aerial image while maintaining matching accuracy, and it omits parameters such as information entropy and structural similarity. This effectively reduces the sensitivity of the matching method to rotation, scale change, and image-quality loss, so the method copes more effectively with the aerial environment.
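A sketch of the weighted similarity under these definitions; the patent does not name the exact divergence, so a symmetric KL divergence over softmax-normalised vectors is assumed here:

```python
import numpy as np

def matching_similarity(map_vec: np.ndarray, img_vec: np.ndarray,
                        k1: float, k2: float, k3: float) -> float:
    """Weighted similarity L = k1*L_cosin + k2*L_lk + k3*L_dis."""
    # Cosine similarity between the two feature vectors.
    l_cos = float(np.dot(map_vec, img_vec) /
                  (np.linalg.norm(map_vec) * np.linalg.norm(img_vec)))

    # Divergence term: a symmetric KL divergence over softmax-normalised
    # vectors (the exact divergence is an assumption, see above).
    def softmax(v):
        e = np.exp(v - v.max())
        return e / e.sum()
    p, q = softmax(map_vec), softmax(img_vec)
    l_lk = float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

    # Euclidean distance between the vectors.
    l_dis = float(np.linalg.norm(map_vec - img_vec))

    # Weighted sum; k2 and k3 would normally be chosen negative (or the
    # terms inverted) so that a larger L means a better match.
    return k1 * l_cos + k2 * l_lk + k3 * l_dis
```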
In some embodiments, as shown in fig. 3, which is a schematic flow chart of matching the target map area image provided in an embodiment of the present application, a historical map area image may be obtained before the feature-vector matching similarity is calculated. The historical map area image is the target map area image matched with the previous key-frame aerial image of the aerial image to be positioned; reusing it improves positioning efficiency. Coordinate calculation is performed on the aerial image to be positioned to obtain its region position coordinates, i.e., the longitude and latitude coordinates corresponding to the aerial image to be positioned.
The region position coordinates are then checked. If they fall within the historical map area image, the target to be positioned has not left the region covered by that image, and the subsequent matching and positioning steps are performed with the aerial image to be positioned and the historical map area image, i.e., both are input into the feature extraction description model. If the region position coordinates do not fall within the historical map area image, the step of extracting the image feature vector of the aerial image to be positioned is performed, and a target map area image is matched anew in the satellite map image set, after which the subsequent matching and positioning steps are performed with the aerial image to be positioned and the target map area image.
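This reuse logic might be sketched as follows, with estimate_region_position standing in as a hypothetical helper for the coordinate calculation step, and extract_feature and matching_similarity reused from the earlier sketches:

```python
def select_reference_tile(aerial_img, history_tile, history_bounds,
                          tile_db, weights, threshold):
    """Reuse the previous key frame's map tile while the platform is still
    inside it; otherwise fall back to feature-vector retrieval.

    history_bounds: (west, south, east, north) of the last matched tile.
    tile_db: iterable of (tile_image, map_feature_vector) pairs.
    """
    lon, lat = estimate_region_position(aerial_img)  # hypothetical helper
    west, south, east, north = history_bounds
    if west <= lon <= east and south <= lat <= north:
        return history_tile                  # still inside: skip retrieval

    img_vec = extract_feature(aerial_img).numpy()
    k1, k2, k3 = weights
    best_tile, best_score = None, threshold
    for tile, map_vec in tile_db:
        score = matching_similarity(map_vec, img_vec, k1, k2, k3)
        if score > best_score:               # keep only tiles above threshold
            best_tile, best_score = tile, score
    return best_tile
```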
In this embodiment, the large-volume original satellite map file is processed into a small set of feature files, so that the footprint of the algorithm satisfies the disk-capacity limits of an embedded processor. Compared with a feature-point descriptor matching scheme or a neural network fed the raw image pair directly, extracting features from the real-time image with a neural network and matching them against the satellite map features reduces, at low computational cost, the delay error in the platform's positioning coordinates caused by updating the map matching region.
S500: and inputting the aerial image to be positioned and the target map area image into a feature extraction description model to obtain a matched feature point pair output by the feature extraction description model.
After the target map region image corresponding to the current down-looking scene of the target to be positioned is obtained, the matching positioning of the down-looking image of the target to be positioned and the satellite map can be realized based on the scene matching technology of the feature extraction description network model. As shown in fig. 4, a network structure diagram of a feature extraction description network model provided in an embodiment of the present application is shown, where the feature extraction description model includes a feature extraction network and two branches, that is, a feature point location extraction model and a feature description model, respectively, and one branch outputs a location and a score (confidence) of a feature point, and the other branch outputs a descriptor of the feature point. That is, a feature point position extraction model is used to extract feature points and confidence of an image, and a feature description model is used to generate descriptors of the feature points.
As shown in fig. 4, the aerial image to be positioned and the target map area image are used as input to the feature extraction description model; a feature map is extracted by the feature extraction network and fed into both the feature point position extraction model and the feature description model. In the feature point position extraction model, a convolution layer (Conv) detects candidate feature points, and the feature point positions and confidences are then output through an activation function (Softmax) and a matrix reshaping function (Reshape). In the feature description model, a convolution layer (Conv) generates the feature point descriptors, which are output after bicubic interpolation (Bi-Cubic Interpolation) and L2 normalization (L2-Norm).
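In the spirit of fig. 4, a sketch of such a two-headed network in PyTorch; the channel counts, grayscale input, and the 8×8 cell layout are assumptions, not taken from the patent:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureExtractDescribe(nn.Module):
    """Shared encoder with a detector head (positions + confidence) and a
    descriptor head, mirroring the two branches described above."""
    def __init__(self, desc_dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(          # grayscale H x W -> H/8 x W/8
            nn.Conv2d(1, 64, 3, 1, 1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3, 1, 1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, 1, 1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Detector head: 65 channels = 64 positions of an 8x8 cell + "no point".
        self.det_head = nn.Conv2d(128, 65, 1)
        # Descriptor head: coarse descriptors, upsampled in forward().
        self.desc_head = nn.Conv2d(128, desc_dim, 1)

    def forward(self, x):
        feat = self.encoder(x)
        # Feature point branch: Softmax over channels, drop the "no point"
        # bin, then reshape the 8x8 cells back to full resolution.
        prob = F.softmax(self.det_head(feat), dim=1)[:, :-1]
        prob = F.pixel_shuffle(prob, 8).squeeze(1)        # confidence map
        # Descriptor branch: bicubic upsampling + L2 normalisation.
        desc = self.desc_head(feat)
        desc = F.interpolate(desc, scale_factor=8, mode="bicubic",
                             align_corners=False)
        desc = F.normalize(desc, p=2, dim=1)
        return prob, desc
```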
For training of the feature extraction description model, training data needs to be acquired, and feature points are marked on a natural image by the training model. As shown in fig. 5, the training of the feature extraction description model includes three parts, namely, a feature point extraction preliminary training, a scene feature point extraction training and a matching training.
First, the preliminary feature point extraction training: a synthetic dataset may be acquired, comprising geometric images with labeled feature points, for example the standard corner images shown in fig. 5, which are geometric images annotated with corner coordinates. A full convolutional network is constructed and trained on the synthetic dataset to obtain the basic feature extraction model.
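A toy stand-in for one sample of such a synthetic dataset (the shape type and sizes are arbitrary choices):

```python
import numpy as np
import cv2

def synth_corner_sample(size: int = 160):
    """Draw a random convex polygon and return the image together with its
    corner coordinates, which serve as the labeled feature points."""
    img = np.zeros((size, size), dtype=np.uint8)
    pts = np.random.randint(10, size - 10, (4, 2))
    hull = cv2.convexHull(pts).reshape(-1, 2)   # keep a convex polygon
    cv2.fillPoly(img, [hull], color=255)
    return img, hull.astype(np.float32)         # corners = feature labels
```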
Next, the scene feature point extraction training: to close the performance gap on real images, uncalibrated scene images can be obtained, and scene feature point extraction training is performed on the basic feature extraction model with them to obtain the feature point extraction model; this completes the training of the feature point detection part of the feature extraction description network. For example, homographic adaptation (Homographic Adaptation) can be used to realize self-supervised training of the model, improving the performance of the basic feature extraction model and generating pseudo-ground-truth interest points on the uncalibrated scene images.
Finally, the matching training: an image transformation (warp) may be applied to the uncalibrated scene images to obtain uncalibrated scene transformed images, and the feature point correspondences between each uncalibrated scene image and its transformed image are obtained. Taking these correspondences as labels, i.e., as ground truth, feature point positions are detected in both images and feature point description matching training is performed on the basic feature extraction model to obtain the feature description model. This completes the training of the descriptor part of the feature extraction description network.
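A sketch of generating one such training pair, where the known random homography supplies the feature point correspondences (the corner-jitter sampling range is an assumption):

```python
import cv2
import numpy as np

def make_training_pair(image: np.ndarray, keypoints: np.ndarray):
    """Warp an uncalibrated scene image with a random homography; the known
    warp yields ground-truth feature point correspondences for matching
    training."""
    h, w = image.shape[:2]
    # Random perturbation of the four corners defines the homography.
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    jitter = np.random.uniform(-0.15, 0.15, (4, 2)) * [w, h]
    dst = (src + jitter).astype(np.float32)
    H = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(image, H, (w, h))
    # Map the detected feature points through H to obtain their labels.
    pts = keypoints.reshape(-1, 1, 2).astype(np.float32)
    pts_warped = cv2.perspectiveTransform(pts, H).reshape(-1, 2)
    return warped, H, pts_warped
```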
The feature extraction description model obtained through training can detect feature point positions in the aerial image to be positioned and in the target map area image respectively, and match the feature points by their descriptions. This realizes feature point extraction and matching between the downward-looking aerial image and the satellite map, yielding the feature point matching relation between the two images, i.e., the matched feature point pairs. Each matched feature point pair comprises the feature points matched in the aerial image to be positioned and the target map area image.
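For illustration, matched pairs could be formed from the two descriptor sets by mutual nearest-neighbour search (one common choice; the patent does not fix the matching rule):

```python
import numpy as np

def match_descriptors(desc1, desc2, points1, points2):
    """Mutual nearest-neighbour matching of L2-normalised descriptor sets
    (rows of desc1/desc2) to produce matched feature point pairs."""
    sim = desc1 @ desc2.T          # cosine similarity matrix
    nn12 = sim.argmax(axis=1)      # best image-2 match for each image-1 point
    nn21 = sim.argmax(axis=0)      # best image-1 match for each image-2 point
    pairs = [(i, j) for i, j in enumerate(nn12) if nn21[j] == i]
    pts1 = np.float32([points1[i] for i, _ in pairs])
    pts2 = np.float32([points2[j] for _, j in pairs])
    return pts1, pts2
```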
S600: and positioning longitude and latitude coordinates of the target to be positioned according to the matched characteristic point pairs.
After the matched feature point pairs output by the feature extraction description model are obtained, high-confidence pairs can be selected with a statistical model and solved to obtain the coordinate transformation matrix between the images. Combining this with the longitude and latitude information of the satellite map, the longitude and latitude coordinates of the target to be positioned are calculated, completing its autonomous positioning.
In some embodiments, as shown in fig. 6, which is a schematic diagram of the image mapping relationship provided in an embodiment of the present application, feature points on one image can be mapped onto the other according to the coordinate transformation matrix H between the images. Therefore, the matched feature point pairs can be solved to obtain the coordinate transformation matrix between the aerial image to be positioned and the target map area image. The longitude and latitude coordinate system of the target map area image and the pixel coordinate system of the aerial image to be positioned are acquired, and the matched feature points are converted from the pixel coordinate system into the longitude and latitude coordinate system according to the coordinate transformation matrix, so as to locate the longitude and latitude coordinates of the target to be positioned.
In some embodiments, the coordinate transformation matrix from the aerial image to the satellite image can be solved from the matched feature point pairs:

$$\begin{bmatrix} x_m^2 \\ y_m^2 \\ 1 \end{bmatrix} = H \begin{bmatrix} x_m^1 \\ y_m^1 \\ 1 \end{bmatrix}$$

where H is the coordinate transformation matrix, $(x_m^n, y_m^n)$ are the pixel coordinates of the matched feature points, the image number n is 1 for the aerial image to be positioned and 2 for the target map area image, and m is the index of the matched feature point pair.
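Putting the positioning step together, a sketch using OpenCV, in which RANSAC plays the role of the statistical model that keeps high-confidence pairs; tile_geo is an assumed linear pixel-to-WGS-84 mapping for the map tile:

```python
import cv2
import numpy as np

def locate(pts_aerial, pts_map, aerial_size, tile_geo):
    """Solve H from matched pairs and convert the aerial image centre
    (the platform's nadir point) to longitude/latitude."""
    src = np.float32(pts_aerial).reshape(-1, 1, 2)
    dst = np.float32(pts_map).reshape(-1, 1, 2)
    # RANSAC rejects low-confidence pairs while estimating H.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    w, h = aerial_size
    centre = np.float32([[[w / 2.0, h / 2.0]]])
    u, v = cv2.perspectiveTransform(centre, H)[0, 0]

    # tile_geo = (lon_west, lat_north, deg_per_px_x, deg_per_px_y), i.e. a
    # linear pixel-to-longitude/latitude mapping (this format is assumed).
    lon0, lat0, dppx, dppy = tile_geo
    return lon0 + u * dppx, lat0 - v * dppy
```

In practice, the inlier mask returned by findHomography could also be used to report which feature point pairs were accepted.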
Fig. 7 is a schematic diagram of a positioning result provided in an embodiment of the present application. In the field of autonomous aircraft positioning, aerial images acquired by an aircraft in the field are collected, the aircraft is positioned from them, and the result is visualized. As shown in fig. 7, the upper left shows the aerial image, where the circle marks the relative position of the aircraft and the point shows the pixel coordinates of a selected point; the lower left shows the satellite area map, annotated with the longitude and latitude of the selected point; and the right side shows the positioning result parameters and the area route map.
In this embodiment, a scene matching and positioning framework for the target to be positioned is built on satellite imagery to cope with complex meteorological conditions and with strongly restricted environments in which positioning means relying on external information input, such as satellites and radar, fail. The aim is to accurately identify and match the position of the target area from the satellite map and the aerial scene, so that this visual, real-time method fills the coverage gaps of satellite positioning and provides accurate and effective positioning support for scenarios lacking satellite information.
Compared with machine-vision matching positioning methods, the target positioning method provided by this embodiment embeds a matching-region retrieval framework into the tracking algorithm: the wide-area satellite map is divided into blocks of a given size, each block is reduced to a one-dimensional feature vector by the transfer learning network model, and the corresponding region map is found through similarity calculation. At run time, the target positioning system only needs to load the precomputed feature files for calculation. This reduces computation compared with directly loading and compressing the wide-area map for coarse matching, avoids the massive information loss caused by image compression, and effectively improves positioning efficiency. Feature point extraction and matching between images is realized by a deep-learning feature extraction description network; each module is designed end to end and requires no parameter tuning by the user, so the method adapts well and copes effectively with changes across various scenes (such as deserts and mining areas, where features are not salient).
Based on the above target positioning method, an embodiment of the present application further provides a target positioning system based on a transfer learning network model and image matching. The target positioning system comprises an image acquisition module, a feature vector extraction module, an image matching module, a feature point matching module, and a positioning module.
The image acquisition module is used for acquiring an aerial image to be positioned, wherein the aerial image to be positioned is an image shot by a target to be positioned.
And the feature vector extraction module is used for extracting the image feature vector of the to-be-positioned aerial image.
The image matching module is used for acquiring a satellite map image set, wherein the satellite map image set comprises map feature vectors of a plurality of map area images, and extracting target map area images matched with the to-be-positioned aerial image from the satellite map image set according to the image feature vectors and the map feature vectors.
The feature point matching module is used for inputting the aerial image to be positioned and the target map area image into a feature extraction description model to obtain the matched feature point pairs output by the feature extraction description model, wherein each matched feature point pair comprises the feature points matched in the aerial image to be positioned and the target map area image; the feature extraction description model comprises a feature point position extraction model and a feature description model, the feature point position extraction model being used to extract the feature points and confidences of an image and the feature description model being used to generate the descriptors of the feature points.
And the positioning module is used for positioning the longitude and latitude coordinates of the target to be positioned according to the matched characteristic point pairs.
According to the technical scheme, the target positioning method and system based on a transfer learning network model and image matching can acquire an aerial image to be positioned, captured by the target to be positioned, and extract its image feature vector. A satellite map image set is acquired, comprising the map feature vectors of a plurality of map area images. According to the image feature vector and the map feature vectors, a target map area image matching the aerial image to be positioned is extracted from the satellite map image set. The aerial image to be positioned and the target map area image are input into a feature extraction description model to obtain matched feature point pairs, each comprising the feature points matched between the two images. Finally, the longitude and latitude coordinates of the target to be positioned are located according to the matched feature point pairs. By matching the aerial image information acquired by the target to be positioned against an onboard satellite map, the method achieves real-time positioning without external signal support while ensuring positioning accuracy.
The foregoing detailed description of the embodiments is merely illustrative of the general principles of the present application and should not be taken in any way as limiting the scope of the invention. Any other embodiments developed by those skilled in the art according to the present application without inventive effort fall within the scope of protection of the present application.

Claims (10)

1. The target positioning method based on the matching of the transfer learning network model and the image is characterized by comprising the following steps of:
acquiring an aerial image to be positioned, wherein the aerial image to be positioned is an image shot by a target to be positioned;
extracting an image feature vector of the aerial image to be positioned;
acquiring a satellite map image set, wherein the satellite map image set comprises map feature vectors of a plurality of map area images;
extracting a target map area image matched with the to-be-positioned aerial image from the satellite map image set according to the image feature vector and the map feature vector;
inputting the aerial image to be positioned and the target map area image into a feature extraction description model to obtain a matched feature point pair output by the feature extraction description model, wherein the matched feature point pair comprises feature points matched in the aerial image to be positioned and the target map area image, the feature extraction description model comprises a feature point position extraction model and a feature description model, the feature point position extraction model is used for extracting feature points and confidence of the image, and the feature description model is used for generating descriptors of the feature points;
and positioning longitude and latitude coordinates of the target to be positioned according to the matched characteristic point pairs.
2. The target positioning method based on a transfer learning network model and image matching according to claim 1, wherein the step of acquiring a satellite map image set comprises:
acquiring a satellite map image;
dividing the satellite map image into a plurality of map area images;
and extracting one-dimensional feature vectors of the map area image to obtain a satellite map image set.
3. The target positioning method based on a transfer learning network model and image matching according to claim 1, wherein the step of extracting, from the satellite map image set, the target map area image matched with the aerial image to be positioned according to the image feature vector and the map feature vector comprises:
traversing a plurality of map feature vectors in the satellite map image set;
calculating the matching similarity of the image feature vector and the map feature vector, wherein the matching similarity is calculated according to cosine similarity, divergence and Euclidean distance;
and extracting a target map area image in the satellite map image set, wherein the target map area image is the map area image with the matching similarity larger than a similarity threshold value.
4. The target positioning method based on a transfer learning network model and image matching according to claim 3, further comprising:
calculating cosine similarity, divergence and Euclidean distance of the image feature vector and the map feature vector;
acquiring weights of the cosine similarity, the divergence and the Euclidean distance;
and weighting and summing the cosine similarity, the divergence and the Euclidean distance according to the weight to obtain the matching similarity.
5. The target positioning method based on a transfer learning network model and image matching according to claim 1, further comprising:
acquiring a synthetic dataset comprising geometric images of a plurality of marker feature points;
constructing a full convolution network, and training the full convolution network based on the synthetic data set to obtain a basic feature extraction model;
and obtaining a non-calibrated scene image, and performing scene feature point extraction training on the basic feature extraction model based on the non-calibrated scene image to obtain a feature point extraction model.
6. The target positioning method based on a transfer learning network model and image matching according to claim 5, further comprising:
performing image transformation on the non-calibration scene image to obtain a non-calibration scene transformation image;
acquiring the corresponding relation between the characteristic points of the non-calibrated scene image and the non-calibrated scene change image;
and taking the corresponding relation of the feature points as a label, and carrying out feature point description matching training on the basic feature extraction model to obtain a feature description model.
7. The target positioning method based on a transfer learning network model and image matching according to claim 1, wherein the step of positioning the longitude and latitude coordinates of the target to be positioned according to the matched feature point pairs comprises:
resolving the matched characteristic point pairs to obtain coordinate transformation matrixes of the to-be-positioned aerial image and the target map area image;
acquiring a longitude and latitude coordinate system of the target map area image and a pixel coordinate system of the to-be-positioned aerial image;
and converting the matched characteristic points in the pixel coordinate system into the longitude and latitude coordinate system according to the coordinate transformation matrix so as to position the longitude and latitude coordinates of the target to be positioned.
8. The target positioning method based on a transfer learning network model and image matching according to claim 7, wherein the matched feature point pairs satisfy the following formula:

$$\begin{bmatrix} x_m^2 \\ y_m^2 \\ 1 \end{bmatrix} = H \begin{bmatrix} x_m^1 \\ y_m^1 \\ 1 \end{bmatrix}$$

where H is the coordinate transformation matrix, $(x_m^n, y_m^n)$ are the pixel coordinates of the matched feature points, the image number n is 1 for the aerial image to be positioned and 2 for the target map area image, and m is the index of the matched feature point pair.
9. The target positioning method based on a transfer learning network model and image matching according to claim 1, further comprising:
acquiring a historical map area image, wherein the historical map area image is a target map area image matched with the previous key frame aerial image of the aerial image to be positioned;
acquiring the regional position coordinates of the aerial image to be positioned;
if the regional position coordinates are located in the historical map regional image, inputting the to-be-positioned aerial image and the historical map regional image into a feature extraction description model;
and if the regional position coordinates are not positioned in the historical map regional image, executing the step of extracting the image feature vector of the aerial image to be positioned.
10. A target positioning system based on a transfer learning network model and image matching, comprising:
the image acquisition module is used for acquiring an aerial image to be positioned, wherein the aerial image to be positioned is an image shot by a target to be positioned;
the feature vector extraction module is used for extracting the image feature vector of the to-be-positioned aerial image;
the image matching module is used for acquiring a satellite map image set, wherein the satellite map image set comprises map feature vectors of a plurality of map area images, and extracting a target map area image matched with the to-be-positioned aerial image from the satellite map image set according to the image feature vectors and the map feature vectors;
the feature point matching module is used for inputting the aerial image to be positioned and the target map area image into a feature extraction description model so as to obtain the matched feature point pairs output by the feature extraction description model, wherein each matched feature point pair comprises the feature points matched in the aerial image to be positioned and the target map area image, the feature extraction description model comprises a feature point position extraction model and a feature description model, the feature point position extraction model is used for extracting the feature points and confidences of an image, and the feature description model is used for generating the descriptors of the feature points;
and the positioning module is used for positioning the longitude and latitude coordinates of the target to be positioned according to the matched characteristic point pairs.
CN202311056871.0A 2023-08-22 2023-08-22 Target positioning method and system based on transfer learning network model and image matching Pending CN117274375A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311056871.0A CN117274375A (en) 2023-08-22 2023-08-22 Target positioning method and system based on transfer learning network model and image matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311056871.0A CN117274375A (en) 2023-08-22 2023-08-22 Target positioning method and system based on transfer learning network model and image matching

Publications (1)

Publication Number Publication Date
CN117274375A true CN117274375A (en) 2023-12-22

Family

ID=89216756

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311056871.0A Pending CN117274375A (en) 2023-08-22 2023-08-22 Target positioning method and system based on transfer learning network model and image matching

Country Status (1)

Country Link
CN (1) CN117274375A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118196196A (en) * 2024-03-27 2024-06-14 北京大希科技有限公司 Indoor image positioning method based on feature matching
CN118521764A (en) * 2024-07-23 2024-08-20 西北工业大学 Unmanned aerial vehicle to ground target joint positioning method, device and system in a denied environment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination