CN115100449A - Remote sensing data multi-target relevance matching and track generation method and equipment - Google Patents


Info

Publication number
CN115100449A
CN115100449A (application CN202210921778.0A)
Authority
CN
China
Prior art keywords: image, interest, association, target, matching
Prior art date
Legal status: Granted
Application number
CN202210921778.0A
Other languages
Chinese (zh)
Other versions
CN115100449B (en)
Inventor
常江
贺广均
冯鹏铭
金世超
刘世烁
梁银川
莫毅君
符晗
邹同元
韩昱
张鹏
车程安
Current Assignee
Beijing Institute of Satellite Information Engineering
Original Assignee
Beijing Institute of Satellite Information Engineering
Priority date
Filing date
Publication date
Application filed by Beijing Institute of Satellite Information Engineering filed Critical Beijing Institute of Satellite Information Engineering
Priority to CN202210921778.0A priority Critical patent/CN115100449B/en
Publication of CN115100449A publication Critical patent/CN115100449A/en
Application granted granted Critical
Publication of CN115100449B publication Critical patent/CN115100449B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/757 Matching configurations of points or features
    • G06V 10/462 Salient features, e.g. scale-invariant feature transforms [SIFT]
    • G06V 10/761 Proximity, similarity or dissimilarity measures
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/82 Image or video recognition or understanding using neural networks
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/33 Determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T 2207/10032 Satellite or aerial image; Remote sensing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method and device for multi-target association matching and trajectory generation from remote sensing data. Multi-modal sequence remote sensing images are spatially registered using scale-invariant feature transform (SIFT) features, and the target information in the images is then associated and matched using a multi-target association matching method based on topological feature similarity matching.

Description

Remote sensing data multi-target correlation matching and track generation method and equipment
Technical Field
The invention relates to a method and equipment for remote sensing data multi-target association matching and track generation.
Background
In recent years, with the development of remote sensing technology, Earth observation has come to feature multiple platforms, multiple sensors, and high temporal resolution. The resulting multi-modal, multi-temporal remote sensing data provide a rich source of spatial information for applications such as resource survey, environmental monitoring, and construction planning. However, remote sensing images from different sensors and different epochs differ in spatial features, texture features, and other respects, which complicates the comprehensive use of multi-modal sequence remote sensing data. How to fuse multi-modal sequence remote sensing data, fully exploiting the complementary observations of multiple platforms and sensors to obtain more accurate and complete target information, is therefore one of the hot topics in the remote sensing field.
Remote sensing information fusion is generally divided into three levels: pixel-level fusion, feature-level fusion, and decision-level fusion. At present, deep learning methods are mostly adopted for pixel-level fusion and have achieved good results; however, they are mainly oriented to homogeneous remote sensing data and are difficult to apply well to heterogeneous fusion scenarios, and deep-learning-based methods usually require large amounts of training data interpreted and labeled by experts, placing high demands on data quality. Feature-level and decision-level fusion are better suited to heterogeneous data fusion scenarios, but research there is still at an early stage, and methods with high efficiency, high precision, and broad applicability are lacking.
Disclosure of Invention
In view of the above technical problems, the invention uses a topological feature similarity matching method to perform multi-target association matching and target trajectory generation on multi-source heterogeneous remote sensing data, providing a feasible technical scheme for multi-modal sequence remote sensing data fusion applications. The invention aims to provide a remote sensing data multi-target association matching and trajectory generation method, which uses scale-invariant feature transform (SIFT) features to spatially register multi-modal sequence remote sensing images, and then uses a multi-target association matching method based on topological feature similarity matching to associate and match the target information in the images.
The technical solution for realizing the purpose of the invention is as follows: a remote sensing data multi-target association matching and trajectory generation method comprising the following steps:
Step S1, obtaining multi-modal multi-temporal sequence remote sensing image data of a preset area, and extracting target-of-interest information from the image sequence;
Step S2, performing spatial registration on the obtained multi-modal sequence remote sensing images;
Step S3, performing association matching of multiple groups of targets of interest on any two images in the sequence, and fusing corresponding targets of interest according to the association results;
Step S4, calculating all coordinate positions of the fused targets of interest in the spatially registered image sequence, and fitting to obtain the motion trajectories of the targets.
According to one aspect of the invention, in step S2, the spatial registration of the obtained multi-modal sequence remote sensing images is performed using a method based on SIFT feature matching, which specifically includes:
Step S21, selecting any image in the image sequence as the reference image for spatial registration of the whole sequence, with the remaining images as images to be registered;
Step S22, extracting the scale-invariant feature points and their description vectors in all images with the SIFT algorithm;
Step S23, for each feature point in an image to be registered, using the KNN algorithm to find the K feature points in the reference image with the highest matching degree; with K = 2 this yields a nearest-neighbor match and a next-nearest-neighbor match;
Step S24, fitting an affine transformation matrix by least squares from the feature-point matching results between the image to be registered and the reference image;
Step S25, transforming the target-of-interest positions in the image to be registered into a coordinate system consistent with the reference image according to the fitted affine transformation matrix, completing the spatial registration of the image sequence.
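The least-squares fit and coordinate transform of steps S24 and S25 can be sketched in numpy as follows. This is a minimal illustration assuming the matched point pairs from step S23 are already available; the function names fit_affine and apply_affine are illustrative, not from the patent.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of a 2x3 affine matrix mapping src -> dst.

    src, dst: (N, 2) arrays of matched feature-point coordinates
    (N >= 3). Returns (A, sse) where A is the 2x3 affine matrix and
    sse is the sum of squared residuals of the fit.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    n = len(src)
    # Design matrix: [x, y, 1] for each matched point.
    X = np.hstack([src, np.ones((n, 1))])
    # Solve X @ A.T ~= dst for the 2x3 matrix A in the least-squares sense.
    A_T, _, _, _ = np.linalg.lstsq(X, dst, rcond=None)
    sse = float(np.sum((X @ A_T - dst) ** 2))
    return A_T.T, sse

def apply_affine(A, pts):
    """Map (N, 2) points into the reference image's coordinate system."""
    pts = np.asarray(pts, dtype=float)
    X = np.hstack([pts, np.ones((len(pts), 1))])
    return X @ A.T
```

The returned sum of squared errors can be compared against the preset error threshold δ to decide whether to discard the image to be registered, as in step S24.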
According to an aspect of the present invention, in step S23, when the ratio r of a feature point's nearest-neighbor match distance to its next-nearest-neighbor match distance is greater than a preset threshold T, the match is rejected.
According to an aspect of the present invention, in step S24, when the sum of squared errors of the least-squares fit is greater than a preset error threshold δ, the corresponding image to be registered is discarded.
According to an aspect of the present invention, in step S3, a topological feature similarity matching method is used to perform association matching of multiple groups of targets of interest on any two images in the sequence, and corresponding targets of interest are fused according to the association results, which specifically includes:
Step S31, for any image in the image sequence, extracting the scale-invariant feature points of the targets of interest using the scale-invariant feature transform algorithm;
Step S32, calculating the two-dimensional topological feature description vectors of the feature point sets of all targets of interest;
Step S33, calculating the topological feature similarity of the targets of interest between any two images according to the description vectors;
Step S34, associating the targets of interest between the two images according to the topological feature similarity, and fusing corresponding targets of interest according to the association results.
According to an aspect of the present invention, in step S32, calculating the two-dimensional topological feature description vectors of the feature point sets of all targets of interest specifically includes:
establishing a polar coordinate system with the centroid of the target of interest as the pole, and dividing all feature points into 8 intervals by polar angle, each interval being a sector of π/4 radians; and calculating the topological feature value of each interval according to the formula: [formula image], where T_i is the topological feature value of the i-th interval, n_i is the number of feature points in the i-th interval, and ρ_{i,j} is the polar radius of the j-th feature point in the i-th interval. The topological feature description vector of the target of interest is expressed as V = (T_1, T_2, …, T_8).
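The descriptor of step S32 can be sketched as follows. The formula itself is given only as an image in the source, so as a stated assumption each sector's topological feature value is taken here to be the mean polar radius of its feature points; the actual patented formula may combine n_i and ρ_{i,j} differently.

```python
import numpy as np

def topo_descriptor(points):
    """Two-dimensional topological feature description vector (step S32).

    points: (N, 2) SIFT keypoint coordinates of one target of interest.
    A polar coordinate system is set up at the centroid; points are split
    into 8 sectors of pi/4 radians by polar angle. ASSUMPTION: each
    sector's feature value is the mean polar radius of its points
    (0 for an empty sector), since the source formula is not reproduced.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    d = pts - centroid
    rho = np.hypot(d[:, 0], d[:, 1])               # polar radii
    theta = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
    bins = (theta // (np.pi / 4)).astype(int)      # sector index 0..7
    bins = np.minimum(bins, 7)                     # guard the 2*pi edge
    v = np.zeros(8)
    for i in range(8):
        in_bin = rho[bins == i]
        if in_bin.size:
            v[i] = in_bin.mean()
    return v
```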
According to an aspect of the present invention, in step S33, calculating the topological feature similarity of the targets of interest between any two images according to the description vectors includes: [formula images], where S is the topological feature similarity of the two targets of interest, D and Sum are intermediate quantities used to calculate the similarity, V_p is the topological feature description vector of target of interest p in the first image, V_q is the topological feature description vector of target of interest q in the second image, i and j are polar-coordinate interval indexes, abs is the absolute-value function, max is the maximum-value function, and h is an average filter used to eliminate the influence of differences in target orientation.
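A hedged sketch of the similarity of step S33. The source gives the formulas only as images, so the exact definitions below are assumptions: the descriptors are smoothed with a circular 3-tap average filter h, D is taken as the sum of absolute differences of the smoothed descriptors, Sum as the sum of their element-wise maxima, and the similarity as 1 - D / Sum.

```python
import numpy as np

def topo_similarity(v1, v2):
    """Topological feature similarity (step S33) - an illustrative sketch.

    ASSUMPTIONS (the patent's formulas are not reproduced): each 8-element
    descriptor is smoothed with a circular 3-tap average filter h to damp
    the effect of target orientation differences; D = sum of absolute
    differences, Sum = sum of element-wise maxima, S = 1 - D / Sum.
    """
    h = np.ones(3) / 3.0
    def smooth(v):
        v = np.asarray(v, dtype=float)
        # circular convolution over the 8 angular intervals
        return np.array([np.dot(h, v[np.arange(i - 1, i + 2) % len(v)])
                         for i in range(len(v))])
    a, b = smooth(v1), smooth(v2)
    d = np.abs(a - b).sum()
    total = np.maximum(a, b).sum()
    return 1.0 - d / total if total > 0 else 0.0
```

Identical descriptors give similarity 1, and descriptors with disjoint support give values near 0.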
According to an aspect of the present invention, in step S34, associating the targets of interest between the two images according to the topological feature similarity specifically includes:
Step S341, calculating the topological feature similarity matrix of the two images: S = [s_{ij}], i = 1, …, M, j = 1, …, N, where M is the number of targets of interest in the former image, N is the number of targets of interest in the latter image, and s_{ij} is the topological feature similarity between the i-th target of interest in the former image and the j-th target of interest in the latter image;
Step S342, finding the maximum element s_{pq} in the matrix; if s_{pq} ≥ θ, the p-th target in the former image and the q-th target in the latter image are associated, with association confidence s_{pq}, and the p-th row and the q-th column are deleted from the matrix; this process is repeated until all elements in the matrix are smaller than the preset association confidence threshold θ ([formula image]).
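Steps S341 and S342 amount to a greedy assignment over the similarity matrix, which can be sketched as follows; the default threshold value 0.5 is an arbitrary placeholder for θ, which the patent leaves as a preset constant.

```python
import numpy as np

def greedy_associate(S, theta=0.5):
    """Greedy association from a similarity matrix (steps S341-S342).

    S: (M, N) matrix; S[i, j] is the topological feature similarity of
    the i-th target in the former image and the j-th in the latter.
    Repeatedly takes the largest remaining element; if it is at least the
    confidence threshold theta, the pair is associated with that
    confidence and its row and column are removed from consideration.
    Returns a list of (i, j, confidence) triples.
    """
    S = np.array(S, dtype=float)
    pairs = []
    while S.size and S.max() >= theta:
        i, j = np.unravel_index(np.argmax(S), S.shape)
        pairs.append((int(i), int(j), float(S[i, j])))
        S[i, :] = -np.inf   # "delete" row i and column j
        S[:, j] = -np.inf
    return pairs
```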
According to an aspect of the present invention, in step S34, the corresponding targets of interest are fused by recursively building a group of association trees, which specifically includes:
Step S343, selecting any target of interest as a root node to establish an association tree, with all of its associated targets as first-level child nodes;
Step S344, taking those associated targets of the first-level child nodes that are not yet present in the association tree as second-level child nodes, and so on, to obtain a complete association tree; this step is repeated for any target of interest not yet appearing in any association tree, yielding a group of independent association trees that covers all targets of interest;
Step S345, judging whether any association tree contains two nodes corresponding to different targets of interest on the same image; if so, deleting the connection with the lowest association confidence on the path between the two nodes, splitting the original association tree into two new association trees;
Step S346, the resulting group of association trees represents the final association result, and the nodes contained in any one association tree are the targets of interest to be fused.
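Viewing each association tree as a connected component of the pairwise association graph, steps S343 to S346 can be sketched as follows; the function name build_groups and the dict-based graph representation are illustrative choices, not from the patent.

```python
from collections import defaultdict, deque

def build_groups(edges, image_of):
    """Association-tree grouping with conflict splitting (steps S343-S346).

    edges: list of (u, v, confidence) pairwise associations.
    image_of: dict mapping each target to the image it comes from.
    Two targets from the same image must not share a group; when they do,
    the lowest-confidence link on the path between them is deleted,
    splitting the tree (step S345). Returns the groups to be fused.
    """
    adj = defaultdict(dict)
    for u, v, c in edges:
        adj[u][v] = c
        adj[v][u] = c

    def path(a, b):
        # BFS path from a to b over the current links
        prev, q = {a: None}, deque([a])
        while q:
            x = q.popleft()
            if x == b:
                out = []
                while x is not None:
                    out.append(x)
                    x = prev[x]
                return out[::-1]
            for y in adj[x]:
                if y not in prev:
                    prev[y] = x
                    q.append(y)
        return None

    def component(a):
        seen, q = {a}, deque([a])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    q.append(y)
        return seen

    # Split any component holding two targets from the same image.
    changed = True
    while changed:
        changed = False
        for a in list(image_of):
            by_img = defaultdict(list)
            for t in sorted(component(a)):
                by_img[image_of[t]].append(t)
            for same in by_img.values():
                if len(same) > 1:
                    p = path(same[0], same[1])
                    # drop the weakest link on the path between them
                    u, v = min(zip(p, p[1:]), key=lambda e: adj[e[0]][e[1]])
                    del adj[u][v]
                    del adj[v][u]
                    changed = True
                    break
            if changed:
                break

    groups, seen = [], set()
    for a in image_of:
        if a not in seen:
            comp = component(a)
            seen |= comp
            groups.append(sorted(comp))
    return groups
```

On the A3-C5-D7-B8-E20-C7 example from the description, the conflict between C5 and C7 (both from image C) is resolved by cutting the weakest link on the path between them.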
According to an aspect of the present invention, there is provided an apparatus comprising a storage medium and a processor, the storage medium storing a computer program which, when executed by the processor, implements the method of any of the above aspects.
According to the concept of the invention, multi-modal multi-temporal sequence remote sensing image data of a preset area over a period of time are obtained, and the target-of-interest information in the image sequence is extracted. The obtained multi-modal sequence remote sensing images are spatially registered using a feature-matching-based method; association matching of multiple groups of targets of interest is performed on any two images in the sequence using the topological feature similarity matching method, and corresponding targets of interest are fused according to the association results; all coordinate positions of the fused targets of interest in the spatially registered image sequence are calculated, and their motion trajectories are obtained by fitting. By combining the redundant or complementary spatial and temporal information of different sensors, the influence of differences in spatial features, texture features, and other aspects among remote sensing images from different sensors and different epochs is avoided, target trajectory information that is more complete and more accurate than that from single-sensor, single-epoch data is obtained, and the efficiency and precision of multi-modal sequence remote sensing data processing are improved.
Drawings
FIG. 1 is a flow chart schematically illustrating a method for multi-target relevance matching and trajectory generation of remote sensing data according to an embodiment of the invention;
FIG. 2 schematically shows an example association tree for multi-target association matching according to one embodiment of the invention;
FIG. 3 is a flow chart of a method for multi-target relevance matching and trajectory generation of remote sensing data according to another embodiment of the invention;
FIG. 4 schematically shows a flowchart of step S2 according to one embodiment of the present invention;
FIG. 5 schematically shows a flowchart of step S3 according to an embodiment of the present invention;
FIG. 6 schematically shows a flowchart of step S34 according to one embodiment of the present invention;
fig. 7 schematically shows a flowchart of step S34 according to another embodiment of the present invention.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can also be derived from them without inventive effort.
The present invention is described in detail below with reference to the drawings and specific embodiments; the embodiments of the present invention are not limited to the following examples.
As shown in fig. 1 to 7, the method for multi-target association matching and trajectory generation of remote sensing data of the present invention includes the following steps:
Step S1, obtaining multi-modal multi-temporal sequence remote sensing image data of a preset area, and extracting target-of-interest information from the image sequence;
Step S2, performing spatial registration on the obtained multi-modal sequence remote sensing images;
Step S3, performing association matching of multiple groups of targets of interest on any two images in the sequence, and fusing corresponding targets of interest according to the association results;
Step S4, calculating all coordinate positions of the fused targets of interest in the spatially registered image sequence, and fitting to obtain the motion trajectories of the targets.
In this embodiment, after the multi-modal multi-temporal sequence remote sensing image data of the preset area have been acquired over a certain time period, the target-of-interest information in the image sequence is extracted, so that the targets of interest are identified. Because satellite orbits, attitudes, imaging angles, and other factors differ, the positioning accuracy errors of the multi-modal sequence remote sensing images differ as well, so the image sequence acquired in step S1 is spatially registered to eliminate the influence of each image's positioning error on the fitted target trajectories. Multiple groups of targets of interest are then associated, matched, and fused across any two images in the sequence; all coordinate positions of the fused targets of interest in the spatially registered image sequence are calculated using the affine transformation parameters obtained in step S2, and polynomial fitting by least squares yields the motion trajectories of the targets.
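The final least-squares polynomial fitting of the registered coordinates can be sketched with numpy.polyfit; the polynomial degree is an assumed parameter, as the text does not fix one, and fit_trajectory is an illustrative name.

```python
import numpy as np

def fit_trajectory(times, coords, degree=2):
    """Least-squares polynomial fit of one fused target's track (step S4).

    times: observation times of the image sequence; coords: (N, 2)
    registered (x, y) positions of the target. Each coordinate is fitted
    with its own polynomial of the given degree (degree=2 is an assumed
    default); the returned callable evaluates the track at any time.
    """
    t = np.asarray(times, dtype=float)
    xy = np.asarray(coords, dtype=float)
    px = np.polyfit(t, xy[:, 0], degree)
    py = np.polyfit(t, xy[:, 1], degree)
    return lambda q: np.stack([np.polyval(px, q), np.polyval(py, q)], axis=-1)
```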
Here, any two images in the sequence refer to images at two time nodes. For example, if the first image is taken at time A and the second at time B, with the first containing 10 targets of interest A1, A2, A3 … A10 and the second containing 12 targets of interest B1, B2, B3 … B12, then A1 is association-matched against B1, B2, B3 … B12, and likewise A2 is association-matched against B1, B2, B3 … B12; the corresponding targets of interest are fused according to the association results.
As shown in fig. 3 and 4, in an embodiment of the present invention, preferably, in step S2, the spatial registration of the obtained multi-modal sequence remote sensing images is performed using a method based on SIFT feature matching, which specifically includes:
Step S21, selecting any image in the image sequence as the reference image for spatial registration of the whole sequence, with the remaining images as images to be registered;
Step S22, extracting the scale-invariant feature points and their description vectors in all images with the SIFT (Scale-Invariant Feature Transform) algorithm;
Step S23, for each feature point in an image to be registered, using the KNN (K-Nearest Neighbor) algorithm to find the K feature points in the reference image with the highest matching degree; with K = 2 this yields a nearest-neighbor match and a next-nearest-neighbor match;
Step S24, fitting an affine transformation matrix by least squares from the feature-point matching results between the image to be registered and the reference image;
Step S25, transforming the target-of-interest positions in the image to be registered into a coordinate system consistent with the reference image according to the fitted affine transformation matrix, completing the spatial registration of the image sequence.
In one embodiment of the present invention, preferably, in step S23, when the ratio r of a feature point's nearest-neighbor match distance to its next-nearest-neighbor match distance is greater than a preset threshold T, the match is rejected.
In this embodiment, when feature points are matched between the reference image and an image to be registered, the complex background of remote sensing imagery means that simply taking the feature point with the smallest Euclidean distance as the match easily produces a large number of false matches. The KNN algorithm is therefore used to compute, for each feature point in the image to be registered, the K feature points in the reference image with the highest matching degree, with K = 2, i.e. a nearest-neighbor and a next-nearest-neighbor match for each feature point. Because of the high dimensionality of the image feature space, a false match may have many other false matches at similar distances, whereas the distance of a correct match should be substantially smaller than those of false matches. Accordingly, when the ratio r of the nearest-neighbor distance to the next-nearest-neighbor distance is greater than the preset threshold T, the match is treated as a false match and eliminated. This avoids mismatches and preserves the accuracy of the spatial registration, thereby improving the accuracy of multi-target association matching and trajectory generation for multi-modal sequence remote sensing data. Here the preset threshold is T = 0.4.
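The ratio test described above can be sketched as follows, with T = 0.4 as given in the text; the (index, d1, d2) input format is an illustrative assumption.

```python
def ratio_test(matches, T=0.4):
    """Lowe-style ratio test used in step S23 (threshold T = 0.4 as in
    the text). matches: list of (index, d1, d2) triples per query
    feature, where d1 is the nearest-neighbor distance in the reference
    image and d2 the next-nearest. A match is kept only when d1 / d2 <= T,
    i.e. the best match is clearly better than the runner-up; otherwise
    it is rejected as a likely false match.
    """
    kept = []
    for idx, d1, d2 in matches:
        if d2 > 0 and d1 / d2 <= T:
            kept.append(idx)
    return kept
```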
In one embodiment of the present invention, preferably, in step S24, when the sum of squared errors of the least-squares fit is greater than the preset error threshold δ, the corresponding image to be registered is discarded.
In this embodiment, when the sum of squared errors of the least-squares fit is greater than the preset error threshold δ, the image to be registered is considered poorly registered to the reference image and is discarded, which also helps improve matching accuracy.
As shown in fig. 3 and fig. 5, in an embodiment of the present invention, preferably, in step S3, a topological feature similarity matching method is used to perform association matching of multiple groups of targets of interest on any two images in the sequence, and corresponding targets of interest are fused according to the association results, which specifically includes:
Step S31, for any image in the image sequence, extracting the scale-invariant feature points of the targets of interest using the scale-invariant feature transform algorithm;
Step S32, calculating the two-dimensional topological feature description vectors of the feature point sets of all targets of interest;
Step S33, calculating the topological feature similarity of the targets of interest between any two images according to the description vectors;
Step S34, associating the targets of interest between the two images according to the topological feature similarity, and fusing corresponding targets of interest according to the association results.
In this embodiment, after the targets of interest have been determined, the scale-invariant feature transform algorithm is used to extract each target's scale-invariant feature points, and the feature points of the same target of interest are combined into a feature point set. The same image contains multiple targets of interest, each with its own feature point set. The two-dimensional topological feature description vectors of all the feature point sets are calculated, and multiple groups of targets of interest in two images are then associated and matched. For example, if the first image is taken at time A and the second at time B, with the first containing 10 targets of interest A1, A2, A3 … A10 and the second containing 12 targets of interest B1, B2, B3 … B12, then A1 is association-matched against B1, B2, B3 … B12, and A2 likewise against B1, B2, B3 … B12, which ensures the accuracy of the association matching.
As shown in fig. 3, fig. 5 and fig. 6, in an embodiment of the present invention, preferably, in step S34, the corresponding targets of interest are fused by recursively building a group of association trees, which specifically includes:
Step S343, selecting any target of interest as a root node to establish an association tree, with all of its associated targets as first-level child nodes;
Step S344, taking those associated targets of the first-level child nodes that are not yet present in the association tree as second-level child nodes, and so on, to obtain a complete association tree; this step is repeated for any target of interest not yet appearing in any association tree, yielding a group of independent association trees that covers all targets of interest;
Step S345, judging whether any association tree contains two nodes corresponding to different targets of interest on the same image; if so, deleting the connection with the lowest association confidence on the path between the two nodes, splitting the original association tree into two new association trees;
Step S346, the resulting group of association trees represents the final association result, and the nodes contained in any one association tree are the targets of interest to be fused.
In this embodiment, the group of association trees is established recursively. For each child node, any associated target that does not yet appear in the tree becomes a child of that node, which guarantees the completeness of the target data. At the same time, every association tree is checked for two nodes corresponding to different targets of interest on the same image; if such nodes exist, the weakest connection between them is deleted and the original association tree is split into two new trees, so that no two targets of interest from the same image end up associated with each other.
For example, the first image includes objects of interest a1, a2, A3 … A8, the second image includes objects of interest B1, B2, B3 … B10, the third image includes objects of interest C1, C2, C3 … C15, the fourth image includes objects of interest D1, D2, D3 … D13, the fifth image includes objects of interest E1, E2, E3 … D20, the sixth image includes objects of interest F1, F2, F3 … F18, the seventh image includes objects of interest G1, G2, G3 … G17, and the eighth image includes objects of interest H1, H2, H3 … H11. The following associations exist in the association tree: A2-D10-B5-E20, when it is confirmed that D10 is associated with E20 and D10 is already present in the association tree, the addition of D10 is not repeated any more to avoid endless repetition of the association tree.
For another example, as shown in fig. 2, the following associations exist in the association tree: A3-C5-D7-B8-E20-C7, the connection with the lowest association confidence degree on the path from C5 to C7 is deleted, and B8-E20 is deleted when the association confidence degree between B8-E20 is lowest.
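The tree-group construction and conflict-splitting steps above can be sketched as follows. This is a minimal illustration rather than the patented implementation: the edge-dictionary representation, the function names, and the use of breadth-first traversal in place of explicit recursion are all assumptions.

```python
from collections import defaultdict, deque

def build_tree_groups(associations):
    """associations: list of ((image, target), (image, target), confidence).
    Returns one edge dict {frozenset({u, v}): confidence} per association tree.
    A target already placed in a tree is never added again, which keeps the
    tree from repeating nodes endlessly (the A2-D10-B5-E20 / D10-E20 case)."""
    graph = defaultdict(list)
    for a, b, conf in associations:
        graph[a].append((b, conf))
        graph[b].append((a, conf))
    visited, trees = set(), []
    for root in graph:
        if root in visited:
            continue
        edges, queue = {}, deque([root])
        visited.add(root)
        while queue:                      # level-by-level expansion of children
            node = queue.popleft()
            for nbr, conf in graph[node]:
                if nbr not in visited:    # skip targets already in some tree
                    visited.add(nbr)
                    edges[frozenset((node, nbr))] = conf
                    queue.append(nbr)
        trees.append(edges)
    return trees

def split_same_image_pair(edges, u, v):
    """Delete the lowest-confidence edge on the unique u-v path of one tree,
    splitting it in two (used when u and v lie on the same image)."""
    adj = defaultdict(list)
    for e in edges:
        a, b = tuple(e)
        adj[a].append(b)
        adj[b].append(a)
    prev, queue = {u: None}, deque([u])   # BFS recovers the u-v path
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in prev:
                prev[y] = x
                queue.append(y)
    path, x = [], v
    while prev[x] is not None:
        path.append(frozenset((x, prev[x])))
        x = prev[x]
    del edges[min(path, key=edges.get)]   # weakest link on the path
```

Applied to the A3-C5-D7-B8-E20-C7 example, splitting on the same-image pair (C5, C7) removes the lowest-confidence connection on the path, here B8-E20.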
In an embodiment of the present invention, preferably, in step S32, calculating a two-dimensional topological feature description vector of the feature point set of all the objects of interest specifically includes:
establishing a polar coordinate system with the centroid of the object of interest as the pole, and dividing all feature points into 8 intervals according to polar angle, each interval being a sector of π/4 radians; and calculating the topological feature value of each interval according to the formula:

$$v_i = \frac{1}{n_i}\sum_{j=1}^{n_i} \rho_{ij}$$

where $v_i$ is the topological feature value of the $i$-th interval, $n_i$ is the number of feature points in the $i$-th interval, and $\rho_{ij}$ is the polar radius of the $j$-th feature point in the $i$-th interval. The topological feature description vector of the object of interest is then expressed as $V = (v_1, v_2, \ldots, v_8)$.
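A sketch of the sector descriptor computation might look like the following. Since the original formula appears only as an image, the per-sector statistic used here (mean polar radius) is an assumption; the centroid pole, the eight π/4 sectors, and the roles of the feature-point count and polar radii follow the text.

```python
import math

def topo_descriptor(points, bins=8):
    """Two-dimensional topological feature descriptor of a feature point set.
    points: list of (x, y) tuples. Points are binned into `bins` equal-angle
    sectors around the centroid; each sector contributes one value (assumed
    here to be the mean polar radius of its feature points)."""
    cx = sum(p[0] for p in points) / len(points)   # centroid is the pole
    cy = sum(p[1] for p in points) / len(points)
    radii = [[] for _ in range(bins)]
    for x, y in points:
        angle = math.atan2(y - cy, x - cx) % (2 * math.pi)
        k = min(int(angle / (2 * math.pi / bins)), bins - 1)
        radii[k].append(math.hypot(x - cx, y - cy))  # polar radius
    return [sum(r) / len(r) if r else 0.0 for r in radii]
```

Empty sectors are given the value 0 here; how the patent handles sectors with no feature points is not specified.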
In an embodiment of the present invention, preferably, in step S33, calculating the topological feature similarity of the objects of interest between any two images according to the description vectors includes:

$$D = \sum_{i=1}^{8} \mathrm{abs}\big(f_{\mathrm{mean}}(V^{1}_{m})[i] - f_{\mathrm{mean}}(V^{2}_{n})[i]\big), \qquad \mathrm{Sum} = \sum_{i=1}^{8} \big(f_{\mathrm{mean}}(V^{1}_{m})[i] + f_{\mathrm{mean}}(V^{2}_{n})[i]\big)$$

$$S = \max\Big(0,\ 1 - \frac{D}{\mathrm{Sum}}\Big)$$

where $S$ is the topological feature similarity of the two objects of interest; $D$ and $\mathrm{Sum}$ are intermediate quantities used to calculate the topological feature similarity; $V^{1}_{m}$ is the topological feature description vector of the $m$-th object of interest in the first image; $V^{2}_{n}$ is the topological feature description vector of the $n$-th object of interest in the second image; $i$ and $j$ are polar coordinate interval indices; $\mathrm{abs}$ is the absolute value function; $\max$ is the maximum value function; and $f_{\mathrm{mean}}$ is a circular mean filter, $f_{\mathrm{mean}}(V)[i] = \frac{1}{3}\sum_{j=i-1}^{i+1} V[(j-1) \bmod 8 + 1]$, used to eliminate the influence caused by differences in target orientation.
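The similarity computation can be illustrated as below. The exact combination of the intermediate quantities is not recoverable from the text, so this sketch only follows the named ingredients: a circular mean filter over the sector bins, an absolute-difference term D, a normalizing term Sum, and a final max; the 3-tap filter window is an assumption.

```python
def circular_mean_filter(v, window=3):
    """Circular (wrap-around) mean filter over the sector bins; smoothing the
    sector histogram reduces the effect of small differences in target
    orientation. The window size is an assumption."""
    n, half = len(v), window // 2
    return [sum(v[(i + k) % n] for k in range(-half, half + 1)) / window
            for i in range(n)]

def topo_similarity(v1, v2):
    """S = max(0, 1 - D / Sum) on the filtered descriptors: identical
    descriptors give 1.0, fully disjoint ones give 0.0."""
    f1, f2 = circular_mean_filter(v1), circular_mean_filter(v2)
    d = sum(abs(a - b) for a, b in zip(f1, f2))       # intermediate quantity D
    total = sum(a + b for a, b in zip(f1, f2))        # intermediate quantity Sum
    return max(0.0, 1.0 - d / total) if total else 1.0
```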
As shown in fig. 3, fig. 5 and fig. 7, in one embodiment of the present invention, preferably, in step S34, the associating the object of interest between the two images according to the similarity of the topological features specifically includes:
step S341, calculating the topological feature similarity matrix of the two images according to the formula:

$$M = \big[s_{ij}\big]_{m \times n}$$

where $m$ is the number of objects of interest in the previous image, $n$ is the number of objects of interest in the latter image, and $s_{ij}$ is the topological feature similarity between the $i$-th object of interest in the previous image and the $j$-th object of interest in the latter image;

step S342, searching for the maximum element $s_{pq}$ in the matrix; if $s_{pq} > \varepsilon$, the $p$-th object in the previous image is associated with the $q$-th object in the latter image with an association confidence of $s_{pq}$, and the $p$-th row and the $q$-th column are deleted from the matrix;

the above process is repeated until all elements in the matrix are less than the preset association confidence threshold $\varepsilon$, where $\varepsilon = 0.7$.
In this embodiment, the maximum element is searched for in the matrix; if it is greater than the preset association confidence threshold, the corresponding target in the previous image is associated with the corresponding target in the latter image, and the row and column at its position are deleted from the matrix. The operation is repeated until the maximum element in the matrix is less than the preset association confidence threshold ε, confirming that the association matching of all objects of interest in the two images is completed.
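The greedy row-and-column elimination of steps S341-S342 can be sketched as follows, with the similarity matrix held as a plain list of lists and ε = 0.7 as stated in the text; the function name and return format are illustrative.

```python
def greedy_match(sim, threshold=0.7):
    """Greedy association on an m x n similarity matrix `sim`:
    repeatedly take the largest remaining entry, and if it exceeds
    `threshold` (epsilon), record the (row, col, confidence) pair and strike
    out its row and column; stop once every remaining entry is below epsilon."""
    free_rows, free_cols = set(range(len(sim))), set(range(len(sim[0])))
    pairs = []
    while free_rows and free_cols:
        p, q = max(((r, c) for r in free_rows for c in free_cols),
                   key=lambda rc: sim[rc[0]][rc[1]])
        if sim[p][q] < threshold:
            break                 # all remaining elements are below epsilon
        pairs.append((p, q, sim[p][q]))
        free_rows.discard(p)      # delete the p-th row ...
        free_cols.discard(q)      # ... and the q-th column
    return pairs
```

Greedy selection matches the described procedure; a globally optimal assignment (e.g. the Hungarian algorithm) would be a different design choice than the one the text specifies.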
The invention further provides a device comprising a storage medium and a processor, the storage medium storing a computer program which, when executed by the processor, implements the above remote sensing data multi-target association matching and track generation method.
In summary, the invention provides a remote sensing data multi-target association matching and track generation method and device. Multi-modal, multi-temporal sequence remote sensing image data of a preset area over a period of time are acquired, and object-of-interest information is extracted from the image sequence; the acquired multi-modal sequence remote sensing images are spatially registered using a feature-matching-based method; association matching of multiple groups of objects of interest is performed on any two images in the sequence using a topological feature similarity matching method, and corresponding objects of interest are fused according to the association results; all spatially registered coordinate positions of the fused objects of interest across the whole image sequence are then calculated, and their motion tracks are obtained by fitting. By combining the spatially and temporally redundant or complementary information of different sensors, the method avoids the influence that differences in spatial features, texture features, and the like among remote sensing images from different sensors and different time phases exert on the processing of multi-modal sequence remote sensing data, obtains target track information more complete and accurate than single-sensor, single-time-phase data, and improves the efficiency and precision of multi-modal sequence remote sensing data processing.
The above description is only one embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A remote sensing data multi-target association matching and track generating method comprises the following steps:
s1, obtaining multi-mode multi-time-phase sequence remote sensing image data of a preset area, and extracting interested target information in an image sequence;
s2, carrying out spatial registration on the obtained multi-modal sequence remote sensing image;
step S3, carrying out association matching on a plurality of groups of interested targets on any two images in the sequence, and fusing the corresponding interested targets according to the association result;
and step S4, calculating all coordinate positions of the fused multiple interest targets in the whole image sequence through spatial registration, and fitting to obtain motion tracks of the multiple interest targets.
2. The method according to claim 1, wherein in step S2, spatial registration is performed on the obtained multi-modal sequence remote sensing images by using a method based on SIFT feature matching, specifically comprising:
step S21, selecting any image in the image sequence as a reference image for space registration of the whole image sequence, and taking the rest images as images to be registered;
step S22, extracting scale invariant feature points and description vectors thereof in all images according to a SIFT algorithm;
step S23, calculating K feature points with the highest matching degree in the reference image by using a KNN algorithm for each feature point in the image to be registered, and obtaining a nearest neighbor matching point and a next nearest neighbor matching point when K = 2;
s24, fitting an affine transformation matrix by using a least square method based on the feature point matching result of the image to be registered and the reference image;
and step S25, transforming the target position of interest in the image to be registered into a coordinate system consistent with the reference image according to the affine transformation matrix obtained by fitting, and completing the spatial registration of the image sequence.
3. The method according to claim 2, wherein in step S23, when the ratio r of the distance between any feature point and its nearest neighbor match to the distance between any feature point and its next neighbor match is greater than a preset threshold T, the matching result is rejected.
4. The method of claim 2, wherein in step S24, when the sum of squared errors of the least-squares fitting is greater than a preset error threshold δ, the corresponding image to be registered is discarded.
5. The method according to claim 1, wherein in step S3, a topological feature similarity matching method is used to perform association matching of multiple groups of objects of interest on any two images in the sequence, and corresponding objects of interest are fused according to the association result, specifically comprising:
step S31, for any image in the image sequence, extracting the scale invariant feature points of the interested target by using a scale invariant feature transform algorithm;
step S32, calculating two-dimensional topological feature description vectors of feature point sets of all interested targets;
step S33, calculating the topological feature similarity of the interested target between any two images according to the description vector;
and step S34, performing correlation of the interested target between the two images according to the topological feature similarity, and fusing the corresponding interested targets according to the correlation result.
6. The method according to claim 5, wherein in step S34, the associating of the object of interest between the two images according to the topological feature similarity includes:
step S341, calculating a topological feature similarity matrix of the two images, wherein the formula is as follows:
$$M = \big[s_{ij}\big]_{m \times n}$$

wherein $m$ is the number of objects of interest in the previous image, $n$ is the number of objects of interest in the latter image, and $s_{ij}$ is the topological feature similarity between the $i$-th object of interest in the previous image and the $j$-th object of interest in the latter image;

step S342, searching for the maximum element $s_{pq}$ in the matrix; if $s_{pq} > \varepsilon$, the $p$-th object in the previous image is associated with the $q$-th object in the latter image with an association confidence of $s_{pq}$, and the $p$-th row and the $q$-th column are deleted from the matrix;
the above process is repeated until all elements in the matrix are less than a preset association confidence threshold, epsilon, where epsilon = 0.7.
7. The method according to claim 5, wherein in step S34, fusing the corresponding objects of interest by using a method for recursively building associated tree groups, specifically comprising:
step S343, selecting any object of interest as a root node to establish an association tree, and taking all of its associated targets as first-level child nodes;
step S344, a target which is not present in the association tree in the association targets of the first-level child nodes is taken as a second-level child node of the first-level child nodes, and so on, so that a complete association tree is obtained; repeating the step for any interested target which does not appear in any associated tree to obtain an associated tree group which is composed of a plurality of independent associated trees and covers all the interested targets;
step S345, judging whether two nodes in any one association tree correspond to different interested targets on the same image, if so, deleting the connection with the lowest association confidence coefficient on the path between the two nodes, and splitting the original association tree into two new association trees;
in step S346, the obtained association tree group represents a final association result, where a node included in any one of the association trees is an interested target that needs to be fused.
8. The method according to claim 5, wherein in the step S32, calculating a two-dimensional topological feature description vector of the feature point set of all the objects of interest specifically includes:
establishing a polar coordinate system by taking the centroid of the target of interest as a pole, and dividing all the characteristic points into 8 intervals according to the size of a polar angle, wherein each interval is a sector area with pi/4 radian; and calculating the topological characteristic value of each interval, wherein the formula is as follows:
$$v_i = \frac{1}{n_i}\sum_{j=1}^{n_i} \rho_{ij}$$

wherein $v_i$ is the topological feature value of the $i$-th interval, $n_i$ is the number of feature points in the $i$-th interval, and $\rho_{ij}$ is the polar radius of the $j$-th feature point in the $i$-th interval; the topological feature description vector V of the object of interest is then expressed as:

$$V = (v_1, v_2, \ldots, v_8)$$
9. The method according to claim 5, wherein in step S33, calculating the topological feature similarity of the objects of interest between any two images according to the description vectors includes:

$$D = \sum_{i=1}^{8} \mathrm{abs}\big(f_{\mathrm{mean}}(V^{1}_{m})[i] - f_{\mathrm{mean}}(V^{2}_{n})[i]\big), \qquad \mathrm{Sum} = \sum_{i=1}^{8} \big(f_{\mathrm{mean}}(V^{1}_{m})[i] + f_{\mathrm{mean}}(V^{2}_{n})[i]\big)$$

$$S = \max\Big(0,\ 1 - \frac{D}{\mathrm{Sum}}\Big)$$

wherein $S$ is the topological feature similarity of the two objects of interest; $D$ and $\mathrm{Sum}$ are intermediate quantities used to calculate the topological feature similarity; $V^{1}_{m}$ is the topological feature description vector of the $m$-th object of interest in the first image; $V^{2}_{n}$ is the topological feature description vector of the $n$-th object of interest in the second image; $i$ and $j$ are polar coordinate interval indices; $\mathrm{abs}$ is the absolute value function; $\max$ is the maximum value function; and $f_{\mathrm{mean}}$ is a circular mean filter, $f_{\mathrm{mean}}(V)[i] = \frac{1}{3}\sum_{j=i-1}^{i+1} V[(j-1) \bmod 8 + 1]$, used to eliminate the influence caused by differences in target orientation.
10. An apparatus comprising a storage medium and a processor, the storage medium storing a computer program, wherein the computer program, when executed by the processor, implements the method of any of claims 1-9.
CN202210921778.0A 2022-08-02 2022-08-02 Remote sensing data multi-target association matching and track generation method and device Active CN115100449B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210921778.0A CN115100449B (en) 2022-08-02 2022-08-02 Remote sensing data multi-target association matching and track generation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210921778.0A CN115100449B (en) 2022-08-02 2022-08-02 Remote sensing data multi-target association matching and track generation method and device

Publications (2)

Publication Number Publication Date
CN115100449A true CN115100449A (en) 2022-09-23
CN115100449B CN115100449B (en) 2023-04-14

Family

ID=83300744

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210921778.0A Active CN115100449B (en) 2022-08-02 2022-08-02 Method and equipment for generating multi-target Guan Lianpi matching and track of remote sensing data

Country Status (1)

Country Link
CN (1) CN115100449B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140064554A1 (en) * 2011-11-14 2014-03-06 San Diego State University Research Foundation Image station matching, preprocessing, spatial registration and change detection with multi-temporal remotely-sensed imagery
CN110533695A (en) * 2019-09-04 2019-12-03 深圳市唯特视科技有限公司 A kind of trajectory predictions device and method based on DS evidence theory
CN112396643A (en) * 2020-12-08 2021-02-23 兰州交通大学 Multi-mode high-resolution image registration method with scale-invariant features and geometric features fused
CN113936037A (en) * 2021-10-13 2022-01-14 重庆邮电大学 Extended target tracking method, medium, and system introducing topological features between targets
CN114494378A (en) * 2022-02-16 2022-05-13 国网江苏省电力有限公司无锡供电分公司 Multi-temporal remote sensing image automatic registration method based on improved SIFT algorithm


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Fan Donghao; Zhu Jianjun; Guo Nannan; Zhou Cui; Zhou Jinghong: "A remote sensing image registration method combining region selection and the SIFT algorithm" *
Chen Tianze; Li Yan: "A high-performance edge point feature matching method for SAR images" *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116522548A (en) * 2023-02-24 2023-08-01 中国人民解放军国防科技大学 Multi-target association method for air-ground unmanned system based on triangular topological structure
CN116522548B (en) * 2023-02-24 2024-03-26 中国人民解放军国防科技大学 Multi-target association method for air-ground unmanned system based on triangular topological structure
CN116778292A (en) * 2023-08-18 2023-09-19 深圳前海中电慧安科技有限公司 Method, device, equipment and storage medium for fusing space-time trajectories of multi-mode vehicles
CN116778292B (en) * 2023-08-18 2023-11-28 深圳前海中电慧安科技有限公司 Method, device, equipment and storage medium for fusing space-time trajectories of multi-mode vehicles

Also Published As

Publication number Publication date
CN115100449B (en) 2023-04-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant