CN107862319B - Heterogeneous high-resolution optical image matching error elimination method based on neighborhood voting - Google Patents

Heterogeneous high-resolution optical image matching error elimination method based on neighborhood voting

Info

Publication number
CN107862319B
CN107862319B · Application CN201711152569.XA
Authority
CN
China
Prior art keywords
matching
matching point
inner product
image
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711152569.XA
Other languages
Chinese (zh)
Other versions
CN107862319A (en)
Inventor
李百寿
沈宇臻
谢跃辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Technology
Original Assignee
Guilin University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Technology filed Critical Guilin University of Technology
Priority to CN201711152569.XA priority Critical patent/CN107862319B/en
Publication of CN107862319A publication Critical patent/CN107862319A/en
Application granted granted Critical
Publication of CN107862319B publication Critical patent/CN107862319B/en


Classifications

    • G06V 10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; salient regional features
    • G06V 10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06F 18/23: Pattern recognition; clustering techniques
    • G06F 18/25: Pattern recognition; fusion techniques
    • G06F 18/259: Fusion by voting
    • G06V 10/751: Image or video pattern matching; comparing pixel values or feature values having positional relevance, e.g. template matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Other Investigation Or Analysis Of Materials By Electrical Means (AREA)

Abstract

The invention discloses a heterogeneous high-resolution optical image matching error elimination method based on neighborhood voting. Taking heterogeneous high-resolution images as the object of matching gross error elimination, the method uses the distance module values and principal-direction included angles between coarse matching points as the basis for neighborhood rejection: it computes the distance inner product value and direction inner product value for each pair of coarse matching points, then, through neighborhood voting against set thresholds, rejects pairs with large errors, retains the accurate pairs, and re-selects correct pairs, achieving gross error elimination for high-resolution image matching. The method fully accounts for the influence of the surrounding matching points on each matching point, taking the relation of all other matching points in the neighborhood to that point as a second-order constraint on its extraction. It improves image matching precision and addresses the low matching precision and poor robustness of existing methods on heterogeneous high-resolution optical images.

Description

Heterogeneous high-resolution optical image matching error elimination method based on neighborhood voting
Technical Field
The invention relates to the field of high-resolution remote sensing image processing, in particular to a heterogeneous high-resolution optical image matching error elimination method based on neighborhood voting.
Background
High-resolution remote sensing image matching is an important foundation of high-resolution image processing, and matching quality has a key influence on subsequent processing and applications such as image fusion, mosaicking, segmentation, object identification and change detection.
In recent years, the comprehensive use of heterogeneous images (remote sensing images from different sources) has become an effective route to deeper application of remote sensing data, and heterogeneous image matching has become one of the key links and technologies in high-resolution remote sensing image processing. With the diversification of sensors and platforms, high-precision image matching grows ever more difficult. Heterogeneous optical image matching algorithms achieve low accuracy when matching images that differ strongly in resolution, time phase and spatial rotation, producing large numbers of mismatched points, which in turn seriously affect the positioning, mosaicking and image feature library construction of high-resolution imagery from UAV, airborne and satellite platforms. Mismatch rejection in the image matching process is therefore a problem that cannot be ignored in photogrammetry and high-resolution remote sensing, and heterogeneous image remote sensing and computer vision have broad market demand for it.
At present, statistical models, spatial information and function fitting are the commonly adopted approaches. The typical statistical-model method is the random sample consensus (RANSAC) algorithm, a robust parameter estimation method; but it iterates repeatedly and tests continuously, so it is time-consuming. It removes all mismatched points when they are few, but cannot remove them completely once their number grows toward the total number of points. Rejection methods based on spatial distribution descriptors can eliminate mismatches with simple characteristics, but the algorithms are complex to implement and lack rotation resistance. Function fitting methods assume, from the known conditions of the images to be matched, that all correct matching points satisfy a certain function model; they are computationally simple and fast, but if many outliers with large errors are present, the fitted model coefficients diverge substantially from the actual ones and the matching precision drops.
The key to solving these problems is to establish, before adjustment processing, conditions and similarity criteria for screening correct point pairs that fully exploit the clustering behavior of correct pairs: by the correlation of image features, any pixel on an image is influenced by the other pixels in its neighborhood, the influence being determined by the distance from the point. Accordingly, all other matching points in a point's image neighborhood impose a second-order constraint on it: the other matching points attach a unique "label" to every coarse matching point on the image, and the labels are compared against set thresholds to decide whether a matching point is retained, achieving simple and fast gross error rejection.
Disclosure of Invention
The invention aims to provide a neighborhood-voting-based heterogeneous high-resolution optical image matching error elimination method that makes full use of the neighborhood information of matching points to decide, from a set distance threshold and direction threshold, whether each coarse matching pair produced by a matching operator is rejected or kept, thereby addressing the poor matching precision and low robustness of existing schemes that describe and match feature points solely by the Euclidean distance between feature descriptors.
The method comprises the following specific steps:
(1) Back up two heterogeneous high-resolution optical images that cover the same area at different resolutions, crop one of them, and cut out different types of ground objects within the common area; the two cropped ground-object images must cover the same area, and the selected ground objects should be as representative as possible.
(2) Use the SIFT matching operator, or another feature detection and description operator with scale invariance and rotation invariance, to perform feature point detection and feature point matching on the data, obtaining the initial coarse matching point pairs. The image plane coordinates (x, y) and principal direction θ of each matching point are then available: on the reference image these are (x, y) and θ, and on the image to be matched (x', y') and θ'. This convention is used throughout and will not be repeated.
(3) Respectively compute the distance module values d_{n,m} and principal-direction included angles θ_{n,m} between any matching point and the other matching points in its neighborhood, on both the reference image and the image to be matched, obtaining the matrices D, D', θ and θ'.
(4) Perform [0,1] normalization on each of the four matrices row by row, obtaining four normalized matrices.
(5) Respectively compute the distance inner product value and direction inner product value of the two points in each matching pair, obtaining an n × 2 matrix, where n is the number of matching point pairs; one of the two columns holds the distance inner product values and the other the direction inner product values.
(6) Choose a distance threshold T_d and a direction threshold T_θ. The thresholds are determined experimentally: several groups of different distance and direction thresholds are taken, the optimal pair is judged through multiple experiments and fixed by the actual experimental conditions, balancing good matching precision against an appropriate number of retained pairs, so that precision is not pushed so far that only a few matching pairs survive. Once the two thresholds are set, traverse the distance and direction inner product values of every matching pair, comparing the distance inner product value with the distance threshold and the direction inner product value with the direction threshold; if both lie within the set thresholds, the pair is judged to have good precision, otherwise it is judged to carry a large error and is rejected.
(7) After the pairs failing the thresholds are removed, take each rejected coarse matching point and, using the four normalized matrices of step (4), compute its distance and direction inner product values against the other coarse matching points; compare these with the set thresholds, and if both lie within range, save the combination as a new matching pair, otherwise reject it.
(8) The pairs kept in step (6) and the pairs newly saved in step (7) together form the matching point pairs on the two images; these are the fine matching pairs after neighborhood voting rejection.
(9) Establish a coordinate conversion model between the images by empirical judgment, and determine the mapping parameters between the two images through the model.
Let the reference image have a matching point A(x_A, y_A) with a corresponding matching point a(x_a, y_a) on the image to be matched. If A and a are taken as a correct match, there is a mapping relation A(x_A, y_A) = R·a(x_a, y_a). After the empirical coordinate conversion model is determined, the reference-image points of the coarse matching pairs are substituted into the model to obtain new coordinate values, and these, together with the corresponding coarse points on the image to be matched, are substituted into the medium error formula:
σ = sqrt( (1/n) · Σ_{i=1}^{n} [ (x_i − x̂_i)² + (y_i − ŷ_i)² ] )
where n is the number of coarse matching point pairs, (x_i, y_i) are the plane coordinates of matching point i on the image to be matched, and (x̂_i, ŷ_i) are the plane coordinates of its reference-image counterpart mapped through the model; the error unit is the pixel. This yields the matching precision of the coarse matching point pairs.
Repeating the same operation on the fine matching point pairs left after neighborhood voting rejection yields the matching precision after gross error elimination.
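The patent leaves the form of the empirical coordinate conversion model open. As one hedged illustration, assuming a six-parameter affine model fitted to the point pairs by least squares (the function name and model form are illustrative, not prescribed by the patent), the medium error could be computed in Matlab as:

```matlab
% Medium-error sketch under an ASSUMED affine coordinate conversion model.
function sigma = medium_error(xyRef, xyMatch)
    n = size(xyRef, 1);
    G = [double(xyRef), ones(n, 1)];   % design matrix [x  y  1]
    P = G \ double(xyMatch);           % 3-by-2 affine parameters, least squares
    res = double(xyMatch) - G * P;     % per-point residuals (pixels)
    sigma = sqrt(sum(res(:).^2) / n);  % medium (RMS) error of the formula above
end
```

Calling it once on the coarse pairs and once on the fine pairs after rejection gives the before-and-after precision figures used in the embodiments.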
(10) Run the whole neighborhood voting rejection process on the image pairs of the different ground objects in the data, obtaining a group of fine matching pairs for each image pair.
(11) Perform precision analysis on the fine matching pairs, judge how much the matching precision of each image pair improves, account for factors such as accidental errors, and analyze the robustness of the neighborhood voting rejection method.
The invention has the beneficial effects that:
the invention fully considers the clustering property of the correct matching points, namely in the image matching, the probability of the correct matching points existing around the correct matching points is far greater than the probability of the correct matching points existing around the wrong matching points, and according to the correlation of the matching information in the neighborhood of the image, any pixel point on the image can be influenced by other pixel points in the neighborhood, and the influence is determined according to the distance from the point. Therefore, the point is subjected to two-degree constraint by using other matching points in the neighborhood of the image, a 'unique' label is attached to all rough matching points on the image by the 'other' matching points, then the 'labels' are used for calculating whether the matching points are matched with a set threshold value, Matlab programming is used for realizing the heterogeneous high-resolution optical image matching error eliminating method based on neighborhood voting, and the implementation result proves that the matching precision can be improved.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Fig. 2 is a schematic diagram of coarse matching points in the embodiment of the present invention.
FIG. 3 is a schematic diagram of distance moduli and direction angles of coarse matching points in an embodiment of the present invention.
FIG. 4 is the cropped building area provided by an embodiment of the present invention; a: QuickBird building, b: GF-2 building.
FIG. 5 shows the matching results for the cropped building area according to an embodiment of the present invention; a: building image pair matching result of the SIFT method, b: building image pair matching result of the method of the invention.
FIG. 6 is the cropped water-body area provided by an embodiment of the present invention; a: QuickBird water body, b: GF-2 water body.
FIG. 7 shows the matching results for the cropped water-body area; a: water-body image pair matching result of the SIFT method, b: water-body image pair matching result of the method of the invention.
FIG. 8 is the cropped road area according to an embodiment of the present invention; a: QuickBird road, b: GF-2 road.
FIG. 9 shows the matching results for the cropped road area according to an embodiment of the present invention; a: road image pair matching result of the SIFT method, b: road image pair matching result of the method of the invention.
Detailed Description
To help those of ordinary skill in the art understand and implement the present invention, it is described in further detail below with reference to the accompanying drawings and examples. It should be understood that the embodiments described here merely illustrate and explain the invention and do not restrict it.
Example:
Fig. 1 is the technical flowchart of the neighborhood-voting-based heterogeneous high-resolution optical image matching error elimination method, which includes the following steps:
(1) Two heterogeneous high-resolution images covering the same area serve as input data; they differ in imaging time, resolution, satellite platform and so on. One image A is taken as the reference image and is regarded as essentially already processed; the other image B is the image to be matched, and is matched against image A.
(2) Feature points are extracted from the two images by feature point extraction operators. Because the images differ in resolution, imaging mode, rotation and so on, scale-invariant feature transform can be adopted for a better result: the extracted feature points are rotation-invariant and affine-invariant, are well suited to matching heterogeneous image data, and each carries a unique feature descriptor and a principal gradient direction.
(3) Feature matching is performed on the extracted feature points: the feature points of the image to be matched are traversed, and for each one the feature point with the most similar features is sought on the reference image. If the similarity of the two points' features meets the threshold set by the matching operator, they are taken as a pair of matching points; the pairs obtained this way are coarse matching pairs, before any gross error processing. Traversal of the algorithm yields a group of coarse matching pairs between the two images; as shown in fig. 2, A and a, B and b, C and c are corresponding coarse matching pairs, θ is the principal direction of a coarse matching point, and (x, y) its image plane coordinates.
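As a minimal sketch of steps (2)-(3), assuming MATLAB's Computer Vision Toolbox (R2021b or later, for detectSIFTFeatures) and Image Processing Toolbox, and with placeholder file names, the coarse matching pairs with their coordinates and principal directions could be gathered as follows:

```matlab
% Coarse matching sketch (steps (2)-(3)); file names are placeholders.
A = im2gray(imread('reference.tif'));     % reference image
B = im2gray(imread('to_match.tif'));      % image to be matched

ptsA = detectSIFTFeatures(A);             % scale- and rotation-invariant keypoints
ptsB = detectSIFTFeatures(B);
[fA, vptsA] = extractFeatures(A, ptsA);   % 128-d descriptors + valid points
[fB, vptsB] = extractFeatures(B, ptsB);

pairs = matchFeatures(fA, fB);            % coarse pairs by descriptor distance

xyA    = vptsA(pairs(:,1)).Location;      % (x, y) on the reference image
xyB    = vptsB(pairs(:,2)).Location;      % (x', y') on the image to be matched
thetaA = vptsA(pairs(:,1)).Orientation;   % principal directions (radians)
thetaB = vptsB(pairs(:,2)).Orientation;
```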
(4) The distance module values and principal-direction included angles between each coarse matching point and its neighborhood matching points are computed on the reference image and on the image to be matched. Taking the reference image as an example, suppose coarse matching points A, B, C lie on it, the image plane coordinates and principal directions of the corresponding matching points being (x_A, y_A), (x_B, y_B), (x_C, y_C) and θ_A, θ_B, θ_C. The distance module values and principal-direction included angles of the neighborhood matching points of A, B, C are computed respectively; for point A these are d_AB, d_AC, θ_AB, θ_AC, as shown in fig. 3, where:
d_AB = sqrt((x_B − x_A)² + (y_B − y_A)²),  d_AC = sqrt((x_C − x_A)² + (y_C − y_A)²)
θ_AB = |θ_B − θ_A|,  θ_AC = |θ_C − θ_A|
The distance module values and principal-direction included angles between all coarse matching points and the other matching points are computed on the two images, giving four matrices: D holds the distance module values between the coarse matching points on the reference image, D' those between the coarse matching points on the image to be matched, and θ and θ' the corresponding principal-direction included angles.
D = (d_{i,j})_{n×n} and D' = (d'_{i,j})_{n×n};  θ = (θ_{i,j})_{n×n} and θ' = (θ'_{i,j})_{n×n}, with zero diagonals (d_{i,i} = θ_{i,i} = 0).
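A sketch of step (4) in base Matlab, building the four matrices from the coordinates and principal directions gathered above:

```matlab
% Step (4): pairwise distance moduli and principal-direction included
% angles among the n coarse points of each image (zero diagonals).
ax = double(xyA(:,1));  ay = double(xyA(:,2));
bx = double(xyB(:,1));  by = double(xyB(:,2));
D   = sqrt((ax - ax.').^2 + (ay - ay.').^2);   % d(i,j) on the reference image
Dp  = sqrt((bx - bx.').^2 + (by - by.').^2);   % d'(i,j) on the image to be matched
ta  = double(thetaA(:));  tb = double(thetaB(:));
Th  = abs(ta - ta.');                          % theta(i,j) = |theta_j - theta_i|
Thp = abs(tb - tb.');                          % theta'(i,j)
```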
(5) The four matrices obtained above are normalized to [0,1], each matrix by row vector, here by means of Matlab's built-in function mapminmax. This step yields four matrices normalized by row vector, as follows:
D̃ = (d̃_{i,j})_{n×n}, D̃' = (d̃'_{i,j})_{n×n}, θ̃ = (θ̃_{i,j})_{n×n}, θ̃' = (θ̃'_{i,j})_{n×n}, each row rescaled to [0,1].
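Step (5) then reduces to four calls of mapminmax (shipped with the Deep Learning Toolbox), which rescales each row independently:

```matlab
% Step (5): row-wise [0,1] normalization; mapminmax maps each row's
% minimum to 0 and its maximum to 1.
Dn   = mapminmax(D,   0, 1);
Dpn  = mapminmax(Dp,  0, 1);
Thn  = mapminmax(Th,  0, 1);
Thpn = mapminmax(Thp, 0, 1);
```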
(6) Given n coarse matching point pairs, inner product calculation is performed on each of them, computing the distance inner product value and direction inner product value of the two points in the pair. Using the row-normalized matrices of step (5), if reference-image matching point 1 and to-be-matched-image point 2 are a pair of coarse matching points, their distance inner product value dot1 and direction inner product value dot2 are:
dot1 = Σ_{k=1}^{n} d̃_{1,k} · d̃'_{2,k}
dot2 = Σ_{k=1}^{n} θ̃_{1,k} · θ̃'_{2,k}
As above, each coarse matching pair is traversed and its distance and direction inner product values are computed, giving an n × 2 matrix, where n is the number of coarse matching pairs; one column holds the distance inner product values dot1 and the other the direction inner product values dot2.
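Continuing the sketch, the n × 2 matrix of inner product values follows from row-wise products of the normalized matrices (rows are ordered by pair index here, so pair i couples row i of the reference-image matrices with row i of the to-be-matched-image matrices):

```matlab
% Step (6): distance and direction inner product values per coarse pair.
dot1 = sum(Dn  .* Dpn,  2);    % distance inner product values
dot2 = sum(Thn .* Thpn, 2);    % direction inner product values
dots = [dot1, dot2];           % the n-by-2 matrix of the text
```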
(7) According to the actual high-resolution images, the number of coarse pairs extracted by the matching operator and similar considerations, the distance and direction thresholds are determined through several groups of experiments. In principle, matching precision should be raised as far as possible while matching pairs are preserved: the number of pairs must not be slashed merely to raise precision, nor may many pairs be kept at the cost of precision. The optimal distance threshold T_d and direction threshold T_θ are therefore obtained through multiple experiments according to the actual needs and purpose.
(8) With the two thresholds obtained, each pair in the n × 2 matrix is compared against them: when both the distance inner product value and the direction inner product value of a matching pair are less than or equal to the thresholds, the pair is judged an accurate match; otherwise it is rejected. That is:
when dot1 is less than or equal to Td,dot2≤TθThe matching point pair is retained.
When dot1 ≤ T_d but dot2 > T_θ, dot1 > T_d but dot2 ≤ T_θ, or dot1 > T_d and dot2 > T_θ, the matching point pair is rejected.
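The vote itself is then a two-sided threshold test; a sketch with the threshold values of specific example 1:

```matlab
% Step (8): threshold vote; 0.4 is the value chosen in specific example 1.
Td = 0.4;  Tt = 0.4;                  % distance and direction thresholds
keep  = (dot1 <= Td) & (dot2 <= Tt);  % logical vote per pair
fineA = xyA(keep, :);                 % retained points, reference image
fineB = xyB(keep, :);                 % retained points, image to be matched
```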
(9) The rejected coarse matching points and the unmatched feature points on the reference image and the image to be matched are traversed, the same inner product operation is performed, and pairs satisfying both thresholds are sought; any such pair is saved as an accurate matching pair.
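The patent does not spell out the re-selection search in code; one hedged reading, in which rejected points on the two images are re-paired whenever some combination passes both thresholds, might look like:

```matlab
% Step (9), one possible reading: re-pair rejected reference points with
% rejected to-be-matched points under the same inner-product test.
rej = find(~keep).';
for i = rej                                % rejected reference rows
    for j = rej                            % rejected to-be-matched rows
        d1 = sum(Dn(i,:)  .* Dpn(j,:));    % distance inner product value
        d2 = sum(Thn(i,:) .* Thpn(j,:));   % direction inner product value
        if d1 <= Td && d2 <= Tt            % both thresholds satisfied:
            fineA(end+1,:) = xyA(i,:);     %#ok<AGROW> re-admit as new pair
            fineB(end+1,:) = xyB(j,:);     %#ok<AGROW>
            break                          % at most one new partner per point
        end
    end
end
```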
(10) Through the operations above, accurate matching pairs are obtained from the coarse matching pairs, and further accurate pairs are re-selected from the mismatched points and unmatched feature points. This completes the gross error elimination of the high-resolution image matching points and yields the accurate matching pairs produced by the second-order constraint of neighborhood voting.
(11) After the above steps, analyze whether the number of accurate matching pairs obtained is appropriate, and compute the image matching precision before and after gross error elimination.
Taking several image pairs of different ground objects over the same area as examples, specific embodiments 1-3 apply the method to high-resolution optical image matching error elimination.
Specific example 1:
A QuickBird image of an area in Guangdong is selected, with 0.61 m spatial resolution and imaging time 2016, together with a GF-2 image with 0.8 m spatial resolution and imaging time 2015; the QuickBird image serves as the reference image and the GF-2 image as the image to be matched for the analysis. The two heterogeneous high-resolution optical images covering the same area are cropped, and different types of ground objects are cut out within the common area, for example a building area, as shown in fig. 4.
Feature points of the reference image and the image to be matched are then extracted with a feature point detection operator, the two images are matched with a matching operator under the corresponding matching principle, and the coarse matching pairs are selected; every coarse matching point has unique pixel plane coordinates and a unique principal direction. The direction and distance differences between each coarse matching point and the other coarse matching points on the same image are computed, giving the distance module value and principal-direction included angle of any two coarse matching points on the same image, and these are normalized to [0,1] by row vector. The inner product values of the direction included angles and of the distances are then computed for the two points of every coarse matching pair across the two images. Finally, for the rejection itself, several groups of experiments show that with T_d = T_θ = 0.5 many mismatched pairs still survive the rejection, while with T_d = T_θ = 0.3 very few matching pairs remain; this embodiment therefore takes T_d = T_θ = 0.4. The distance and direction inner product values are compared with the corresponding thresholds: a pair is saved if both lie within them and rejected otherwise, after which coarse points meeting the thresholds are re-selected and recombined into matching pairs. The matching result of the method is shown in fig. 5(b), with fig. 5(a) the building matching result of the SIFT method.
In specific embodiments 2 and 3, the same matching precision analysis is carried out on the water-body and road areas of the images, and the precision and performance of the method are evaluated. The cropped regions and matching results are shown in figs. 6, 7, 8 and 9 respectively.
The results are analyzed in Table 1. They show that the conventional SIFT method, using scale-invariant feature transform, extracts 338 and 326 feature points from the QuickBird and GF-2 building images respectively, 100 and 485 from the water-body images, and 226 and 662 from the road images. Matching the feature points by Euclidean distance yields 85, 14 and 5 matching pairs for the buildings, water bodies and roads respectively. Precision analysis after matching shows matching errors of 20.0340, 61.8433 and 67.6633 pixels for the buildings, water bodies and roads: the matching precision is low and the mismatch rate high.
With the method of the invention, the building, water-body and road pairs remaining after rejection and re-selection number 42, 11 and 5, with corresponding matching errors of 4.6417, 39.4758 and 10.2053 pixels and rejection times of 0.012 s, 0.0086 s and 0.0014 s respectively. Matching precision improves while the matching time stays within an acceptable range.
Table 1. Comparison of image matching precision for different ground objects: SIFT vs. the method of the invention

Ground object | SIFT pairs | SIFT error (px) | Pairs after rejection | Error after rejection (px) | Rejection time (s)
Building      | 85         | 20.0340         | 42                    | 4.6417                     | 0.0120
Water body    | 14         | 61.8433         | 11                    | 39.4758                    | 0.0086
Road          | 5          | 67.6633         | 5                     | 10.2053                    | 0.0014
Compared with the traditional scale-invariant feature transform (SIFT) algorithm, the embodiments show that building matching precision improves by 77%, the road class by 85% and the water body by 36%, an average improvement of 66%. The new method applies a finer gross error elimination to the SIFT matching result, improving the matching precision of the images while preserving the scale invariance and other characteristics of the original algorithm.
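These percentages follow directly from the matching errors in Table 1, e.g.:

building: (20.0340 − 4.6417) / 20.0340 ≈ 77%
road: (67.6633 − 10.2053) / 67.6633 ≈ 85%
water body: (61.8433 − 39.4758) / 61.8433 ≈ 36%
average: (77% + 85% + 36%) / 3 ≈ 66%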
The invention thus fully considers that any pixel in an image is strongly influenced by its surrounding pixels, the influence centred on the pixel and inversely proportional to the radius. On the basis of the influence of the surrounding matching points on each coarse matching point, coarse pairs failing the threshold rule are rejected and then reconsidered against the set thresholds. This resolves the insufficient precision and low robustness of heterogeneous high-resolution optical images in image matching; compared with other gross error rejection methods, the method has low complexity and high precision.
While the invention has been described in connection with specific embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention.

Claims (8)

1. A neighborhood-voting-based heterogeneous high-resolution optical image matching error elimination method, characterized by comprising the following steps:
step 1: back up two heterogeneous high-resolution optical images covering the same area at different resolutions, crop one of them, and cut out different types of ground objects within the common area;
step 2: perform feature point detection and feature point matching on the data with the SIFT matching operator or another feature detection and description operator having scale invariance and rotation invariance, to obtain the initial coarse matching point pairs;
step 3: respectively compute the distance module values d_{n,m} and principal-direction included angles θ_{n,m} between any matching point and the other matching points in its neighborhood, on the reference image and the image to be matched, obtaining matrices D, D', θ and θ', where D holds the distance module values between the coarse matching points on the reference image, D' those between the coarse matching points on the image to be matched, and θ and θ' the corresponding principal-direction included angles; perform [0,1] normalization on each of the four matrices row by row, obtaining four normalized matrices;
D = (d_{i,j})_{n×n} and D' = (d'_{i,j})_{n×n};  θ = (θ_{i,j})_{n×n} and θ' = (θ'_{i,j})_{n×n}, with zero diagonals (d_{i,i} = θ_{i,i} = 0);
step 4: respectively compute the distance inner product value and direction inner product value of the two matching points in each matching point pair, obtaining an n × 2 matrix, where n is the number of matching point pairs; one of the two columns holds the distance inner product values and the other the direction inner product values;
step 5: obtain the optimal distance threshold T_d and direction threshold T_θ;
step 6: perform voting rejection of gross errors.
2. The neighborhood-voting-based heterogeneous high-resolution optical image matching error elimination method according to claim 1, wherein:
in step 3, the distance module values and principal-direction included angles between the coarse matching points and their neighborhood matching points are computed on the reference image and the image to be matched. Taking the reference image as an example, suppose coarse matching points A, B, C lie on it, the image plane coordinates and principal directions of the corresponding matching points being (x_A, y_A), (x_B, y_B), (x_C, y_C) and θ_A, θ_B, θ_C; the distance module values and principal-direction included angles of the neighborhood matching points of A, B, C are computed respectively, those of point A being d_AB, d_AC, θ_AB, θ_AC, where:
d_AB = sqrt((x_B − x_A)² + (y_B − y_A)²),  d_AC = sqrt((x_C − x_A)² + (y_C − y_A)²)
θ_AB = |θ_B − θ_A|,  θ_AC = |θ_C − θ_A|
the four matrices above are normalized to [0,1], each matrix by row vector, using the Matlab function mapminmax; this step yields four matrices normalized by row vector, as follows:
D̃ = (d̃_{i,j})_{n×n}, D̃' = (d̃'_{i,j})_{n×n}, θ̃ = (θ̃_{i,j})_{n×n}, θ̃' = (θ̃'_{i,j})_{n×n}, each row rescaled to [0,1].
3. The neighborhood-voting-based heterogeneous high-resolution optical image matching error elimination method according to claim 1, wherein:
in step 4 the distance inner product value and direction inner product value of the two matching points in any matching point pair are calculated with the following formulas:
dot1 = Σ_{k=1}^{n} d̃_{1,k} · d̃'_{2,k}
dot2 = Σ_{k=1}^{n} θ̃_{1,k} · θ̃'_{2,k}
where reference-image matching point 1 and to-be-matched-image point 2 are a pair of coarse matching points, dot1 is their distance inner product value, dot2 their direction inner product value, and there are n pairs of coarse matching points.
4. The neighborhood-voting-based heterogeneous high-resolution optical image matching error elimination method according to claim 1, wherein:
the distance threshold T_d and direction threshold T_θ in step 5 are determined experimentally: several groups of different distance and direction thresholds are taken, the optimal pair is judged through multiple experiments and fixed by the actual experimental conditions, so as to obtain good matching precision while keeping an appropriate number of matching pairs and avoiding a situation where only a few pairs survive because precision is pushed too far.
5. The neighborhood-voting-based heterogeneous high-resolution optical image matching error elimination method according to claim 1, wherein:
the voting rejection of step 6 traverses the distance and direction inner product values of every matching pair, comparing the distance inner product value with the distance threshold and the direction inner product value with the direction threshold; if both lie within the set thresholds, the pair is judged to have good precision, otherwise it is judged to carry a large error and is rejected; the judgment criteria are:
(1) when dot1 ≤ T_d and dot2 ≤ T_θ, the matching point pair is retained;
(2) when dot1 ≤ T_d but dot2 > T_θ, dot1 > T_d but dot2 ≤ T_θ, or dot1 > T_d and dot2 > T_θ, the matching point pair is rejected.
6. The neighborhood-voting-based heterogeneous high-resolution optical image matching error elimination method according to claim 1 or 5, wherein:
as to step 6, for the coarse matching points rejected in claim 5, the inner products of each rejected coarse point with the other coarse points are computed from the four normalized matrices of step 3, giving the corresponding distance and direction inner product values; these are compared with the set thresholds, and if both lie within the corresponding ranges, the combination is saved as a new matching point pair, otherwise it is rejected.
7. The neighborhood-voting-based heterogeneous high-resolution optical image matching error elimination method according to claim 1, wherein:
as to step 6, the matching point pairs retained in claim 5 and the matching point pairs newly saved in claim 6 are combined into the matching point pairs on the two images; these are the fine matching point pairs after neighborhood voting rejection.
8. The neighborhood-voting-based heterogeneous high-resolution optical image matching error elimination method according to claim 1, wherein:
as to step 6, a coordinate conversion model between the images is established by empirical judgment, and the mapping parameters between the two images are determined through the model; let the reference image have a matching point A(x_A, y_A) with a corresponding matching point a(x_a, y_a) on the image to be matched; if A and a are taken as a correct match, there is a mapping relation A(x_A, y_A) = R·a(x_a, y_a); after the empirical coordinate conversion model is determined, the reference-image points of the coarse matching pairs are substituted into the model to obtain new coordinate values, and these, together with the corresponding coarse points on the image to be matched, are substituted into the following medium error formula:
σ = sqrt( (1/n) · Σ_{i=1}^{n} [ (x_i − x̂_i)² + (y_i − ŷ_i)² ] )
where n is the number of coarse matching point pairs, (x_i, y_i) are the plane coordinates of matching point i on the image to be matched, and (x̂_i, ŷ_i) are the plane coordinates of its reference-image counterpart mapped through the model; the error unit is the pixel. The matching precision of the fine matching point pairs is thus obtained.
CN201711152569.XA 2017-11-19 2017-11-19 Heterogeneous high-resolution optical image matching error elimination method based on neighborhood voting Active CN107862319B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711152569.XA CN107862319B (en) Heterogeneous high-resolution optical image matching error elimination method based on neighborhood voting

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711152569.XA CN107862319B (en) Heterogeneous high-resolution optical image matching error elimination method based on neighborhood voting

Publications (2)

Publication Number Publication Date
CN107862319A CN107862319A (en) 2018-03-30
CN107862319B true CN107862319B (en) 2021-11-16

Family

ID=61703148

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711152569.XA Active CN107862319B (en) 2017-11-19 2017-11-19 Heterogeneous high-resolution optical image matching error elimination method based on neighborhood voting

Country Status (1)

Country Link
CN (1) CN107862319B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110634149B (en) * 2018-06-22 2022-03-18 湖南大学 Non-rigid target characteristic point matching method for optical motion capture system
CN109117854B (en) * 2018-07-25 2021-01-29 北京达佳互联信息技术有限公司 Key point matching method and device, electronic equipment and storage medium
CN109948624A (en) * 2019-02-18 2019-06-28 北京旷视科技有限公司 Method, apparatus, electronic equipment and the computer storage medium of feature extraction
CN110992263B (en) * 2019-11-27 2023-07-11 国网智能科技股份有限公司 Image stitching method and system
CN113496183B (en) 2020-04-08 2023-02-03 上海耕岩智能科技有限公司 Fingerprint matching method and device, electronic equipment and readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7767395B2 (en) * 2005-04-15 2010-08-03 Becton, Dickinson And Company Diagnosis of sepsis
CN102693542B (en) * 2012-05-18 2015-02-11 中国人民解放军信息工程大学 Image characteristic matching method
CN103389074A (en) * 2013-07-18 2013-11-13 河南科技大学 Multi-scale scene matching area selecting method
CN104200461B (en) * 2014-08-04 2017-08-25 西安电子科技大学 The remote sensing image registration method of block and sift features is selected based on mutual information image

Also Published As

Publication number Publication date
CN107862319A (en) 2018-03-30

Similar Documents

Publication Publication Date Title
CN107862319B (en) Heterogeneous high-resolution optical image matching error elimination method based on neighborhood voting
CN111028277B (en) SAR and optical remote sensing image registration method based on pseudo-twin convolution neural network
CN110097093B (en) Method for accurately matching heterogeneous images
CN109145915B (en) Rapid distortion correction method for license plate under complex scene
CN109064502B (en) Multi-source image registration method based on combination of deep learning and artificial design features
CN104574421B (en) Large-breadth small-overlapping-area high-precision multispectral image registration method and device
WO2019042232A1 (en) Fast and robust multimodal remote sensing image matching method and system
CN109409208A (en) A kind of vehicle characteristics extraction and matching process based on video
CN109118528A (en) Singular value decomposition image matching algorithm based on area dividing
CN108052904B (en) Method and device for acquiring lane line
CN110569861B (en) Image matching positioning method based on point feature and contour feature fusion
CN104751465A (en) ORB (oriented brief) image feature registration method based on LK (Lucas-Kanade) optical flow constraint
CN107610166B (en) Planar map image registration method based on linear feature region segmentation
CN110866430A (en) License plate recognition method and device
CN107240130B (en) Remote sensing image registration method, device and system
CN105787943B (en) SAR image registration method based on multi-scale image block feature and rarefaction representation
CN107862708A (en) A kind of SAR and visible light image registration method
CN108537832B (en) Image registration method and image processing system based on local invariant gray feature
CN106096497B (en) A kind of house vectorization method for polynary remotely-sensed data
CN105869168A (en) Multi-source remote sensing image shape registering method based on polynomial fitting
CN110675442A (en) Local stereo matching method and system combined with target identification technology
CN103353941A (en) Natural marker registration method based on viewpoint classification
KR20140093407A (en) Recognition device, vehicle model recognition apparatus and method
CN113095385A (en) Multimode image matching method based on global and local feature description
CN104484647B (en) A kind of high-resolution remote sensing image cloud height detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant