CN109409292B - Heterogeneous image matching method based on refined feature optimization extraction - Google Patents

Heterogeneous image matching method based on refined feature optimization extraction

Info

Publication number
CN109409292B
CN109409292B · CN201811260078.1A
Authority
CN
China
Prior art keywords
image
matching
point
target
optical image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811260078.1A
Other languages
Chinese (zh)
Other versions
CN109409292A (en)
Inventor
李亚超
胡思茹
朱天启
全英汇
项宇泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201811260078.1A priority Critical patent/CN109409292B/en
Publication of CN109409292A publication Critical patent/CN109409292A/en
Application granted granted Critical
Publication of CN109409292B publication Critical patent/CN109409292B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a heterogeneous image matching method based on refined feature optimization extraction, which mainly solves the problem of low matching precision in the prior art. The technical scheme is: 1. calculating the gradient modulus values and directions of the optical image and the SAR image with different gradient operators respectively; 2. completing region-of-interest calibration using the characteristic properties of the target in the image; 3. obtaining the edge features of the regions of interest of the two images from the result of step 1, and obtaining a coarse-matching affine transformation matrix between the two images through template matching; 4. obtaining a transformed optical image with the coarse-matching affine transformation matrix, and extracting refined features; 5. performing preliminary matching on the refined features and eliminating singular-point interference to obtain a fine-matching affine transformation matrix; 6. obtaining the final affine transformation relation matrix between the optical image and the SAR image from the results of steps 3 and 5. The method achieves accurate registration of heterogeneous images and can be used for aircraft guidance.

Description

Heterogeneous image matching method based on refined feature optimization extraction
Technical Field
The invention belongs to the technical field of radar image interpretation, and particularly relates to a method for registering heterogeneous images, which is suitable for providing more accurate target information for aircraft guidance.
Background
Satellite remote sensing is an effective means for humans to observe, analyze and describe the living environment of the Earth. With the development of remote sensing technology, sensor types have multiplied and resolutions have increased, and remote sensing image registration has evolved from registration of images of the same waveband and resolution to registration of multiband, multiresolution and multi-imaging-mode images. Information obtained by a single sensor cannot meet application requirements. For example, in geological disaster response, information from multiple sensors such as a Synthetic Aperture Radar (SAR), a hyperspectral imaging system and an infrared night-vision device often has to be combined to locate the disaster and determine a relief route in time. Research on multi-source image matching therefore has very important practical significance.
Because the radiation characteristics of ground objects generally differ across wavebands, remote sensing images of the same region obtained by different sensors differ in resolution, gray value, spectrum, time phase, scene characteristics and so on, so conventional image registration methods cannot meet the present requirements of high speed and high precision. The heterogeneous image matching method based on refined feature optimization extraction adopts a two-step strategy: first, ground objects with salient features in the images are used to achieve an initial registration between the images, and then refined features are extracted to improve the registration precision, so that registration precision is ensured without increasing the matching computation load.
In the document 'SAR image and optical image matching based on improved Hausdorff measure and genetic algorithm', Qiu et al. describe a heterogeneous image matching method based on edge features, which uses an improved Hausdorff measure to analyze the similarity of the edge features of two images to realize image matching. Although the method is simple, the improved Hausdorff measure has no rotation invariance, so its application conditions are limited and its matching timeliness is poor.
Zhang Cheng et al., in the document 'Automatic matching method of optical and SAR images based on surface features', use a region-growing method to extract closed regions in the optical and SAR images as surface features, and design a cost function that jointly considers attributes such as area, perimeter and center position to cross-match the surface features. However, extracting a closed region requires edge features extracted without breaks, which is almost impossible in SAR images, so the method currently remains at the theoretical level and its practical effect is poor.
Ling Gang et al., in the document 'A robust multisource remote sensing image feature matching method', introduce a phase-consistency transformation to eliminate the influence of differences in image gray scale and contrast, and then realize feature-point matching by a Zernike-moment-reconstructed cross-correlation function. The method iteratively corrects the transformation parameters to achieve automatic registration between images, but this makes matching very time-consuming, and phase consistency is strongly parameter-dependent, so the stability of the algorithm is lacking.
Fan Dengke et al., in the document 'A multisource remote sensing image registration method based on phase-consistency correlation', adopt multiscale Harris corner extraction, use pyramid hierarchical mapping as the search strategy, determine homonymous regions through phase consistency, and thereby realize heterogeneous image matching.
Disclosure of Invention
The invention aims to provide a heterogeneous image matching method based on refined feature optimization extraction, which solves the problems of poor matching timeliness, insufficient stability and low matching precision in the prior art and provides accurate target scene and position information for aircraft guidance.
The technical idea of the invention is as follows: a two-step strategy is adopted, first achieving a preliminary registration between the images using ground objects with salient features, and then extracting refined features to improve the registration precision. The implementation scheme comprises the following steps:
(1) for the optical image I_O, calculating the gradient modulus and direction with the Sobel operator; for the SAR image I_S, calculating the gradient modulus and direction with the Gaussian-Gamma double-window method;
(2) judging the region of interest according to the result of (1), and rotating the two images to the same angle using the angle of the straight line corresponding to the region of interest;
(3) extracting the edge features of the two regions of interest with the Canny edge detection operator, using non-maximum suppression and hysteresis thresholding, according to the results of (1) and (2);
(4) calculating the improved Hausdorff distance between the two edge feature images obtained in (3), taking it as the similarity measure, comparing the similarity of the two images, searching for the optimal matching position, and obtaining the coarse-matching affine transformation matrix T_A1 between the two images;
(5) applying the affine transformation matrix T_A1 to the optical image I_O (scaling, rotation and translation) to obtain the transformed optical image I_O′, then segmenting I_O′ with the maximum between-class variance method, and removing false features from the segmented image with a shape factor to obtain the refined features;
(6) calculating the Euclidean distances D_ij between refined feature points, obtaining P matching point pairs between the transformed optical image I_O′ and the SAR image I_S according to a bidirectional matching principle, performing Delaunay-triangulation mismatch rejection on the P matching point pairs to finally obtain F matching point pairs, and then computing the fine-matching affine transformation matrix T_A2 between I_O′ and I_S from the F matching point pairs;
(7) multiplying the coarse-matching and fine-matching affine transformation matrices to obtain the final affine transformation matrix between the optical image I_O and the SAR image I_S:

T_final = T_A1 · T_A2
compared with the prior art, the invention has the following beneficial effects:
firstly, the two-step strategy further ensures the matching precision while keeping the algorithm efficient: a template-matching method finds the best matching position, and on this basis refined features are extracted to optimize the matching result, improving the matching precision to the level required by engineering applications;
secondly, before matching, the method extracts the image region of interest using known information about targets with salient features, overcoming the lack of rotation invariance of the existing improved Hausdorff distance and widening the range of application; meanwhile, searching only the region of interest instead of the original image data reduces the computation load, giving the method better application value;
thirdly, registration relies mainly on image structure information, which solves the registration difficulty caused by nonlinear gray-level differences between images in existing heterogeneous image matching technology, is less susceptible to noise interference, and improves the robustness and stability of matching;
fourthly, different features are selected as the matching feature space in the two steps; the refined feature extraction effectively solves the difficulty of extracting homonymous features caused by the large differences between heterogeneous images, and offers more possibilities for scene matching.
Drawings
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a flow chart of an implementation of the present invention;
FIG. 2 is a schematic diagram of mismatch culling in the present invention;
FIG. 3 is the optical reference image used in the simulation of the present invention;
FIG. 4 is a SAR image to be registered for use in the simulation of the present invention;
fig. 5 is a diagram of simulation results of final matching of an optical image and an SAR image using the present invention.
Detailed Description
the examples and effects of the present invention are described in detail below with reference to the accompanying drawings:
referring to fig. 1, the implementation steps of the present invention include the following:
step 1, calculating gradient modulus and direction of two images by adopting different operators respectively.
(1a) calculating the gradient modulus and direction of the optical image:
(1a1) convolving the optical image I_O with the x-direction Sobel template m_x and the y-direction Sobel template m_y respectively, to obtain the gradient M_x of the image in the x direction and the gradient M_y in the y direction, where

m_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad m_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}

(1a2) calculating the optical image gradient modulus M_O and direction θ_O from the gradients in the x and y directions:

M_O = \sqrt{M_x^2 + M_y^2}, \qquad \theta_O = \arctan(M_y / M_x)
(2a) calculating the gradient modulus and direction of the SAR image:
(2a1) setting the Gaussian-Gamma double-window function at different direction angles θ as:

G(x, y \mid \theta) = G(x\cos\theta - y\sin\theta,\; x\sin\theta + y\cos\theta)

where the base window function is

G(x, y) = \frac{|y|^{a-1} e^{-|y|/b}}{b^{a}\,\Gamma(a)} \exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right)

σ is a Gaussian scale factor that adjusts the window length, a and b are the parameters of the Gamma function in the direction perpendicular to the edge, where a adjusts the distance between the two windows and b controls the window width, Γ(a) is the Gamma function of a, and θ is the direction angle;
(2a2) convolving the SAR image I_S with the upper and lower half-windows of the Gaussian-Gamma double-window function at different direction angles, to obtain the upper half-window local mean information M_1(x, y | θ) and the lower half-window local mean information M_2(x, y | θ);
(2a3) calculating the gradient modulus M_S and direction θ_S of the SAR image:

M_S(x, y) = \max_{0 \le k < n} \left| \ln \frac{M_1(x, y \mid \theta_k)}{M_2(x, y \mid \theta_k)} \right|, \qquad \theta_S(x, y) = \theta_{k^*}, \quad k^* = \arg\max_{0 \le k < n} \left| \ln \frac{M_1(x, y \mid \theta_k)}{M_2(x, y \mid \theta_k)} \right|

where θ_k = kπ/n and n represents the precision of the gradient direction, taking a value of 8 to 16.
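As an illustration of the optical branch of step 1, the following is a minimal Python sketch (assuming a grayscale float array img; the Gaussian-Gamma bi-window branch for the SAR image is omitted, and the kernel signs follow the common Sobel convention rather than the patent's figures):

```python
import numpy as np
from scipy.ndimage import convolve

def sobel_gradient(img):
    mx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # x-direction Sobel template m_x
    my = mx.T                                 # y-direction Sobel template m_y
    gx = convolve(img, mx, mode='nearest')    # gradient M_x
    gy = convolve(img, my, mode='nearest')    # gradient M_y
    modulus = np.hypot(gx, gy)                # M_O = sqrt(M_x^2 + M_y^2)
    direction = np.arctan2(gy, gx)            # theta_O (arctan of M_y / M_x)
    return modulus, direction
```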
Step 2: calibrating the region of interest according to the properties of the target in the image and the corresponding prior information.
(2a) determining the linear target range from the linear properties of the target and the known information;
(2b) determining a width threshold from the width of the target range, and taking each pair of straight lines whose angle difference is smaller than 5° and whose distance is within the width threshold as a candidate parallel-line pair;
(2c) selecting the midpoint of the centerline of each candidate parallel-line pair as a candidate seed point, comparing the neighborhood gray characteristics of the candidate seed points with the target gray characteristics, selecting the parallel-line pair whose seed-point neighborhood is closest to the target gray characteristics as the target straight-line pair, and taking the seed point of that neighborhood as the target seed point;
(2d) taking the target straight-line pair as the reference, retaining the image pixels located between the target straight-line pair and setting the remaining pixels to zero, to complete the segmentation of the target area in the image;
(2e) retaining the pixels close to the target gray characteristics to obtain a preliminarily segmented binary image, removing the connected domains that contain no target seed point, and then dilating and eroding the retained connected domains in turn to eliminate the 'holes' caused by noise, completing the calibration of the image region of interest.
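The morphological cleanup at the end of step 2 can be sketched as follows in Python (a hedged illustration: the structuring-element size and the seed-point representation are assumptions, not specified by the patent):

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion, label

def clean_roi(mask, seed_points, struct=np.ones((3, 3), dtype=bool)):
    lab, _ = label(mask)                           # label connected domains
    keep = {lab[r, c] for r, c in seed_points if lab[r, c] != 0}
    mask = np.isin(lab, list(keep))                # drop domains without a target seed point
    mask = binary_dilation(mask, structure=struct) # dilate, then erode,
    return binary_erosion(mask, structure=struct)  # to fill noise "holes"
```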
Step 3: extracting the edge features of the regions of interest.
(3a) retaining, in each gradient direction, the pixels whose gradient value is locally maximal as edge candidates and deleting the other pixels, to obtain the non-maximum-suppressed images of the two images;
(3b) performing dual-threshold detection on the non-maximum-suppressed images to obtain the binary edge feature matrix edge:

\mathrm{edge}(x, y) = \begin{cases} 1, & \mathrm{gradient}(x, y) > t_{high} \\ \mathrm{undetermined}, & t_{low} \le \mathrm{gradient}(x, y) \le t_{high} \\ 0, & \mathrm{gradient}(x, y) < t_{low} \end{cases}

where gradient(x, y) is the gradient modulus at pixel (x, y), t_high is the upper threshold, t_low is the lower threshold, 1 denotes an edge pixel, 0 a non-edge pixel, and undetermined a pending edge.
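A minimal Python sketch of this dual-threshold rule (the inequality boundaries are an assumption; grad is the non-maximum-suppressed gradient-modulus array):

```python
import numpy as np

def dual_threshold(grad, t_low, t_high):
    edge = np.zeros(grad.shape, dtype=np.int8)
    edge[grad > t_high] = 1                        # definite edge pixels
    edge[(grad >= t_low) & (grad <= t_high)] = -1  # -1 marks "undetermined"
    return edge                                    # 0 elsewhere: non-edge
```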
Step 4: coarsely registering the optical image and the SAR image to obtain the coarse-matching affine transformation matrix T_A1.
(4a) calculating the improved Hausdorff distance d_H, taking the edge features obtained in step 3 as the feature space.
The improved Hausdorff distance measures the distance between proper subsets of a space, and is calculated as:

d_H(A, B) = \frac{1}{N} \sum_{i=1}^{N} d_{minB}(a_i)

where d_minB(a_i) denotes the i-th element of the ascending sequence obtained by sorting, for each point of the optical image coordinate point set, the minimum distance from that point to the SAR image coordinate point set, i.e. d_minB(a_1) < d_minB(a_2) < ... < d_minB(a_N); averaging only the first N distances effectively overcomes the problems of noise and image occlusion, where N is 0.8 to 0.9 times the number of points in the optical image coordinate point set;
(4b) taking the improved Hausdorff distance as the similarity measure, searching the optical edge image with the SAR edge image as a template, comparing the improved Hausdorff distance between the two images, and taking the region with the minimum improved Hausdorff distance as the optimal matching region, thereby realizing the coarse registration between the optical image and the SAR image;
(4c) performing coordinate transformation on the position coordinates of the optimal matching region to obtain the coarse-matching affine transformation matrix T_A1 between the optical image and the SAR image.
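The improved (partial, averaged) Hausdorff distance of (4a) can be sketched as follows (assuming edge pixels are given as N×2 coordinate arrays; the fraction 0.85 is one value in the 0.8–0.9 range the method allows for N):

```python
import numpy as np

def improved_hausdorff(pts_a, pts_b, frac=0.85):
    # minimum distance from every point of A to the point set B
    diff = pts_a[:, None, :] - pts_b[None, :, :]
    d_min = np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)
    d_sorted = np.sort(d_min)               # ascending d_minB(a_1) <= ...
    n = max(1, int(frac * len(d_sorted)))   # keep only the first N distances
    return d_sorted[:n].mean()              # averaging suppresses outliers
```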
Step 5: optimally extracting refined homonymous features from the images.
(5a) applying the coarse-matching affine transformation matrix T_A1 obtained in step 4 to the optical image (translation, scaling, rotation) to obtain the transformed optical image I_O′;
(5b) calculating the normalized histogram p_i of the transformed optical image I_O′, where i = 0, 1, 2, ..., L−1 and L is the number of gray levels;
(5c) calculating the cumulative sum P_1(k) from the normalized histogram p_i:

P_1(k) = \sum_{i=0}^{k} p_i
(5d) calculating the cumulative mean m(k) from the normalized histogram p_i:

m(k) = \sum_{i=0}^{k} i\, p_i
(5e) calculating the global gray-level mean m_G from the normalized histogram p_i:

m_G = \sum_{i=0}^{L-1} i\, p_i
(5f) calculating the between-class variance from the global mean m_G, the cumulative sum P_1(k) and the cumulative mean m(k):

\sigma_B^2(k) = \frac{\left[ m_G P_1(k) - m(k) \right]^2}{P_1(k)\left[ 1 - P_1(k) \right]}
(5g) finding the value of k that maximizes the between-class variance,

k^* = \arg\max_{0 \le k \le L-1} \sigma_B^2(k)

and taking it as the Otsu threshold k*; comparing each point in the image with the Otsu threshold, setting points greater than the threshold to 1 and the remaining points to 0, to obtain the binary image after segmentation.
(5h) extracting refined homonymous features and calculating the shape factor

S = \frac{C^2}{4\pi A}

where C is the perimeter and A the area of the candidate region;
(5i) setting the threshold S_max according to the specific target size and comparing the shape factor with it: if S > S_max, the region is a false feature and is removed; otherwise it is a correct feature and is retained, yielding the refined features and realizing the optimization of the algorithm.
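Step 5 can be sketched in Python as the standard Otsu threshold plus a shape-factor screen (a hedged illustration: the shape-factor form S = C²/(4πA) and the perimeter estimate are assumptions about implementation detail):

```python
import numpy as np
from scipy.ndimage import label, binary_erosion

def otsu_threshold(img, levels=256):
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()                    # normalized histogram p_i
    p1 = np.cumsum(p)                        # cumulative sum P_1(k)
    m = np.cumsum(np.arange(levels) * p)     # cumulative mean m(k)
    mg = m[-1]                               # global mean m_G
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mg * p1 - m) ** 2 / (p1 * (1 - p1))  # between-class variance
    return int(np.nanargmax(sigma_b))        # Otsu threshold k*

def keep_refined_features(mask, s_max):
    lab, n = label(mask)
    out = np.zeros(mask.shape, dtype=bool)
    for idx in range(1, n + 1):
        region = lab == idx
        area = region.sum()
        perim = (region & ~binary_erosion(region)).sum()  # crude boundary count
        if perim ** 2 / (4 * np.pi * area) <= s_max:      # shape factor S
            out |= region                                 # keep correct features
    return out
```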
Step 6: obtaining the fine-matching affine transformation matrix T_A2 between the transformed optical image I_O′ and the SAR image I_S.
(6a) calculating the Euclidean distance D_ij between features.
The Euclidean distance is the true distance between two points in space, calculated as:

D_{ij} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}

where (x_i, y_i) is any feature coordinate in the refined feature coordinate set F_O extracted from the transformed optical image I_O′, and (x_j, y_j) is any feature coordinate in the refined feature coordinate set F_S extracted from the SAR image I_S;
(6b) obtaining the matching point pairs between the transformed optical image I_O′ and the SAR image I_S:
according to the distances between the refined features calculated in (6a) and the bidirectional matching principle, P matching point pairs between I_O′ and I_S are obtained;
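The bidirectional matching principle can be sketched as mutual nearest neighbours under D_ij (a minimal illustration, assuming feats_o and feats_s are N×2 and M×2 coordinate arrays):

```python
import numpy as np

def bidirectional_match(feats_o, feats_s):
    diff = feats_o[:, None, :] - feats_s[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=2))   # distance matrix D_ij
    o_to_s = d.argmin(axis=1)              # best SAR match for each optical point
    s_to_o = d.argmin(axis=0)              # best optical match for each SAR point
    # keep (i, j) only when the choice is mutual
    return [(i, j) for i, j in enumerate(o_to_s) if s_to_o[j] == i]
```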
(6c) obtaining the fine-matching affine transformation matrix T_A2.
To avoid interference of individual singular point pairs with the overall matching result, the P matching point pairs obtained in (6b) are further screened.
Referring to fig. 2, the specific implementation steps of the mismatch elimination are as follows:
(6c1) determining a reference triangle:
selecting any three pairs of matching points from the P matching point pairs to construct a pair of virtual triangles, and judging the similarity of the two virtual triangles by the following condition:

\frac{l_1^O}{l_1^S} \approx \frac{l_2^O}{l_2^S} \approx \frac{l_3^O}{l_3^S} \quad (\text{equal within a preset tolerance})

if the condition is satisfied, the virtual triangle is taken as the reference triangle;
otherwise, another three point pairs are selected and the judgment is repeated until some virtual triangle satisfies the condition,
where l_1^O, l_2^O, l_3^O are the three sides of the virtual triangle in the optical image, and l_1^S, l_2^S, l_3^S are the three sides of the virtual triangle in the SAR image;
(6c2) mismatch elimination:
connecting the points around the reference triangle to it one by one and constructing newly added triangles, forming a Delaunay triangulation, and judging whether each newly added triangle on the optical image and its counterpart on the SAR image satisfy the similarity condition:
if so, the newly added matching point pair is appended to the matching sequence and deleted from the coordinate point set;
if not, the next point pair in the matching coordinate point set is judged, until all point pairs in the set have been traversed, finally yielding a matching sequence M;
(6c3) obtaining the fine-matching affine transformation matrix T_A2.
To avoid the influence of the initial reference-triangle selection on the final matching result, steps (6c1)-(6c2) are repeated on the remaining points of the coordinate point set until fewer than 3 points remain or no three points satisfying the similarity condition of (6c1) can serve as a reference triangle;
the sequence containing the most points among the matching sequences M is taken as the optimal matching point pairs, and coordinate transformation is then performed on these matching point pairs to obtain the fine-matching affine transformation matrix T_A2.
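The triangle-similarity test used while growing the Delaunay net can be sketched as a ratio-consistency check (a hedged illustration: the exact condition and the tolerance eps are assumptions, since the source reproduces the condition only as an image):

```python
import numpy as np

def sides(p1, p2, p3):
    return np.array([np.linalg.norm(p1 - p2),
                     np.linalg.norm(p2 - p3),
                     np.linalg.norm(p3 - p1)])

def similar_triangles(tri_o, tri_s, eps=0.05):
    r = sides(*tri_o) / sides(*tri_s)   # ratios of corresponding sides
    return np.ptp(r) / r.mean() < eps   # all three ratios agree within eps
```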
Step 7: obtaining the final affine transformation matrix T_final between the SAR image I_S and the optical image I_O.
The coarse-matching affine transformation matrix T_A1 obtained in step 4 and the fine-matching affine transformation matrix T_A2 obtained in step 6 are multiplied to obtain the final matching result T_final:

T_final = T_A1 · T_A2
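If both transforms are stored as 3×3 homogeneous affine matrices (an assumption about representation, not stated in the source), the composition is a single matrix product:

```python
import numpy as np

def compose(t_a1, t_a2):
    return t_a1 @ t_a2   # final affine matrix T_final = T_A1 . T_A2
```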
The effects of the present invention can be further illustrated by the following simulation experiments.
(I) Simulation conditions
The optical reference image used in the simulation experiment is shown in FIG. 3; the image size is 178 × 680 and the resolution is 3 m;
the airborne SAR image to be registered used in the simulation experiment is shown in FIG. 4; the image size is 465 × 348 and the resolution is 3 m; the speckle noise in the image is severe and geometric distortion is present;
the gray scale difference caused by the difference of the two imaging mechanisms is large, and the difference of the attitude angle and the imaging time is also large, so that the extraction of the same-name features in the group of data is very difficult.
(II) Simulation content and results
Simulation 1: fine matching of the optical image of FIG. 3 and the SAR image of FIG. 4 with the method of the invention. The fine-matching result is shown in FIG. 5: the optical image and the SAR image are completely aligned and the matching precision is improved. The simulation shows that the method achieves a high-precision matching effect with a matching error of at most two pixel units, meeting engineering application requirements.
Simulation 2: evaluating the image registration effect before and after refined matching with the root mean square error (RMSE):
the RMSE is defined as the square root of the mean deviation between the positions of the plurality of feature points in the image to be registered after being transformed to the pixel points in the reference image and the pixel point positions in the reference image, and the following equation is used:
Figure BDA0001843699390000091
where N is the number of feature points, Δ xiFor horizontal errors, Δ yiIs the vertical direction error.
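A minimal sketch of this criterion (assuming the per-point errors are already computed in pixels):

```python
import numpy as np

def rmse(dx, dy):
    # dx, dy: arrays of horizontal and vertical errors over N feature points
    return np.sqrt(np.mean(dx ** 2 + dy ** 2))
```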
Homonymous feature point pairs were selected from the images before and after refined matching and their RMSE values counted; the results are shown in Table 1.
Table 1. Quantitative comparison of matching effect before and after refinement

Method              RMSE
Before refinement   23.7323
After refinement     0.1573
Table 1 shows that the refinement process in the present invention can greatly improve the matching accuracy of the heterogeneous images.
Simulation 3: quantitative comparison of different image matching methods:
the Matlab program is written to quantitatively compare the matching effect of the method and the SAR-SIFT algorithm in four aspects of feature point extraction, matching point number, correct matching point number and matching accuracy, and the result is shown in Table 2.
Table 2. Quantitative comparison of matching methods (reproduced as an image in the original publication)
As can be seen from Table 2, the method effectively overcomes the difficulty of extracting homonymous features caused by the large gray-level differences between heterogeneous images and successfully realizes matching, with a matching precision within 2 pixel units, meeting engineering application requirements.
The foregoing description is only an example of the present invention and should not be construed as limiting the invention. It will be apparent to those skilled in the art that various changes and modifications in form and detail may be made without departing from the principles and structure of the invention, but such changes and modifications remain within the scope of the appended claims.

Claims (6)

1. A heterogeneous image matching method based on refined feature optimization extraction comprises the following steps:
(1) for the optical image I_O, calculating the gradient modulus and direction with the Sobel operator; for the SAR image I_S, calculating the gradient modulus and direction with the Gaussian-Gamma double-window method;
(2) judging the region of interest according to the result of step (1), and calibrating the region of interest of the SAR image using the target properties and the known information, implemented as follows:
(2a) determining the linear target range from the linear properties of the target and the known information;
(2b) determining a width threshold from the width of the target range, and taking each pair of straight lines whose angle difference is smaller than 5° and whose distance is within the width threshold as a candidate parallel-line pair;
(2c) selecting the midpoint of the centerline of each candidate parallel-line pair as a candidate seed point, comparing the neighborhood gray characteristics of the candidate seed points with the target gray characteristics, selecting the parallel-line pair whose seed-point neighborhood is closest to the target gray characteristics as the target straight-line pair, and taking the seed point of that neighborhood as the target seed point;
(2d) taking the target straight-line pair as the reference, retaining the image pixels located between the target straight-line pair and setting the remaining pixels to zero, to complete the segmentation of the target area in the image;
(2e) retaining the pixels close to the target gray characteristics to obtain a preliminarily segmented binary image, removing the connected domains that contain no target seed point from the preliminarily segmented binary image, and dilating, eroding, opening and closing the retained connected domains to eliminate the holes caused by noise, completing the calibration of the image region of interest;
(3) extracting the edge features of the two regions of interest with the Canny edge detection operator, using non-maximum suppression and hysteresis thresholding, according to the results of (1) and (2);
(4) calculating the improved Hausdorff distance between the two edge feature images obtained in (3), taking it as the similarity measure, comparing the similarity of the two images, searching for the optimal matching position, and obtaining the coarse-matching affine transformation matrix T_A1 between the two images, wherein the improved Hausdorff distance is calculated by:

d_H(A, B) = \frac{1}{N} \sum_{i=1}^{N} d_{minB}(a_i)

where d_minB(a_i) denotes the i-th element of the ascending sequence obtained by sorting, for each point of the optical image coordinate point set, the minimum distance to the SAR image coordinate point set, i.e. d_minB(a_1) < d_minB(a_2) < ... < d_minB(a_N), and N is 0.8 to 0.9 times the number of points in the optical image coordinate point set;
(5) applying the affine transformation matrix T_A1 to the optical image I_O (scaling, rotation and translation) to obtain the transformed optical image I_O′, then segmenting I_O′ with the maximum between-class variance method, and removing false features from the segmented image with a shape factor to obtain the refined features, implemented as follows:
(5g) calculating the shape factor:

S = \frac{C^2}{4\pi A}

where C is the perimeter and A the area of the candidate region;
(5h) setting the threshold S_max according to the specific target size and comparing the shape factor with it: if S > S_max, the region is a false feature and is removed; otherwise it is a correct feature and is retained;
(6) calculating the Euclidean distances D_ij between refined feature points, obtaining P matching point pairs between the transformed optical image I_O′ and the SAR image I_S according to a bidirectional matching principle, performing Delaunay-triangulation mismatch rejection on the P matching point pairs to finally obtain F matching point pairs, and then computing the fine-matching affine transformation matrix T_A2 between I_O′ and I_S from the F matching point pairs, wherein the Delaunay-triangulation mismatch rejection on the P matching point pairs is implemented as follows:
(6a) constructing a pair of virtual triangles from any three pairs of matching points among the P matching point pairs, and judging the similarity of the two virtual triangles:
if

\frac{l_1^O}{l_1^S} \approx \frac{l_2^O}{l_2^S} \approx \frac{l_3^O}{l_3^S} \quad (\text{equal within a preset tolerance})

the virtual triangle is taken as the reference triangle;
otherwise, another three point pairs are selected and the judgment is repeated until some virtual triangle satisfies the condition,
where l_1^O, l_2^O, l_3^O are the three sides of the virtual triangle in the optical image, and l_1^S, l_2^S, l_3^S are the three sides of the virtual triangle in the SAR image;
(6b) connecting the points around the reference triangle to it one by one and constructing newly added triangles, forming a Delaunay triangulation, and judging whether each newly added triangle on the optical image and its counterpart on the SAR image satisfy the similarity condition of (6a):
if yes, the newly added matching point pair is stored in a matching sequence M and deleted from the image coordinate point set;
if not, continuing to judge the next point pair in the image coordinate point set until all the point pairs in the image coordinate point set are traversed;
(6c) repeating steps (6a)-(6b) for the remaining points in the coordinate point set until fewer than 3 points remain or no three points satisfy the similarity condition of (6a), and taking the sequence with the largest number of points among the matching sequences M as the optimal matching point pairs;
(7) multiplying the coarse-matching and fine-matching affine transformation matrices to obtain the final affine transformation matrix between the optical image I_O and the SAR image I_S:

T_final = T_A1 · T_A2
2. The method of claim 1, wherein the gradient modulus and direction of the optical image I_O are calculated in (1) as follows:
(1a1) convolving the optical image I_O with the x-direction Sobel template m_x and the y-direction Sobel template m_y respectively, to obtain the gradient M_x of the image in the x direction and the gradient M_y in the y direction, where

m_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad m_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}
(1b) calculating the optical image gradient modulus M_O and direction θ_O from the gradients in the x and y directions:

M_O = \sqrt{M_x^2 + M_y^2}

θ_O = arctan(M_y / M_x).
3. The method of claim 1, wherein the gradient modulus and direction of the SAR image I_S are calculated in (1) as follows:
(1c) setting the Gaussian-Gamma double window function under different direction angles as follows:
G(x,y|θ)=G(xcosθ-ysinθ,xsinθ+ycosθ)
where

G(x, y) = \frac{|y|^{a-1} e^{-|y|/b}}{b^{a}\,\Gamma(a)} \exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right)

x and y are the horizontal and vertical coordinates of the image point, and σ is a Gaussian scale factor that adjusts the window length; a and b are the parameters of the Gamma function in the direction perpendicular to the edge, a adjusting the distance between the two windows and b controlling the window width; Γ(a) is the Gamma function of a, and θ is the direction angle;
(1d) convolving the SAR image with the upper and lower half-windows of the Gaussian-Gamma double-window function G(x, y | θ) at different direction angles, to obtain the upper half-window local mean information M_1(x, y | θ) and the lower half-window local mean information M_2(x, y | θ);
(1e) calculating the gradient modulus M_S and direction θ_S of the SAR image from the convolution results of the upper and lower half-windows:

M_S(x, y) = \max_{0 \le k < n} \left| \ln \frac{M_1(x, y \mid \theta_k)}{M_2(x, y \mid \theta_k)} \right|

\theta_S(x, y) = \theta_{k^*}, \quad k^* = \arg\max_{0 \le k < n} \left| \ln \frac{M_1(x, y \mid \theta_k)}{M_2(x, y \mid \theta_k)} \right|

where θ_k = kπ/n and n represents the precision of the gradient direction, taking a value of 8 to 16.
4. The method of claim 1, wherein the edge features of the two regions of interest are extracted with the Canny edge operator in (3) as follows:
(3a) retaining, in each gradient direction, the pixels whose gradient value is locally maximal as edge candidates and deleting the other pixels, to obtain the non-maximum-suppressed images of the two images;
(3b) performing dual-threshold detection on the non-maximum-suppressed images to obtain the binary edge feature matrix M:

M(x, y) = \begin{cases} 1, & \mathrm{gradient}(x, y) > t_{high} \\ \mathrm{undetermined}, & t_{low} \le \mathrm{gradient}(x, y) \le t_{high} \\ 0, & \mathrm{gradient}(x, y) < t_{low} \end{cases}

where gradient(x, y) is the gradient modulus at pixel (x, y), t_high is the upper threshold, t_low is the lower threshold, 1 denotes an edge pixel, 0 a non-edge pixel, and undetermined a pending edge;
(3c) continuing to examine each pending edge pixel in its 3×3 neighborhood and deciding it according to the matrix of (3b); if it still cannot be decided, expanding the search to the 5×5 neighborhood, to obtain the complete edge feature matrix.
5. The method of claim 1, wherein the maximum between-class variance method is employed in (5) to segment the transformed optical image I_O′, implemented as follows:
(5a) computing the normalized histogram p_i of the input image, where i = 0, 1, 2, ..., L−1 and L is the number of gray levels;
(5b) calculating the cumulative sum P_1(k) from the normalized histogram p_i:

P_1(k) = \sum_{i=0}^{k} p_i
(5c) calculating the cumulative mean m(k) from the normalized histogram p_i:

m(k) = \sum_{i=0}^{k} i\, p_i
(5d) calculating the global gray-level mean m_G from the normalized histogram p_i:

m_G = \sum_{i=0}^{L-1} i\, p_i
(5e) calculating the between-class variance from the global mean m_G, the cumulative sum P_1(k) and the cumulative mean m(k):

\sigma_B^2(k) = \frac{\left[ m_G P_1(k) - m(k) \right]^2}{P_1(k)\left[ 1 - P_1(k) \right]}
(5f) finding the value of k that maximizes the between-class variance,

k^* = \arg\max_{0 \le k \le L-1} \sigma_B^2(k)

taking it as the Otsu threshold k*; comparing each point in the image with the Otsu threshold, setting points greater than the threshold to 1 and the remaining points to 0, to obtain the binary image after segmentation.
6. The method of claim 1, wherein the Euclidean distances between feature points are calculated in (6) by the following equation:

D_{ij} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2}

where (x_i, y_i) is any feature coordinate in the refined feature coordinate set F_O extracted from the transformed optical image I_O′, and (x_j, y_j) is any feature coordinate in the refined feature coordinate set F_S extracted from the SAR image I_S.
CN201811260078.1A 2018-10-26 2018-10-26 Heterogeneous image matching method based on refined feature optimization extraction Active CN109409292B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811260078.1A CN109409292B (en) 2018-10-26 2018-10-26 Heterogeneous image matching method based on refined feature optimization extraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811260078.1A CN109409292B (en) 2018-10-26 2018-10-26 Heterogeneous image matching method based on refined feature optimization extraction

Publications (2)

Publication Number Publication Date
CN109409292A CN109409292A (en) 2019-03-01
CN109409292B true CN109409292B (en) 2021-09-03

Family

ID=65469290

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811260078.1A Active CN109409292B (en) 2018-10-26 2018-10-26 Heterogeneous image matching method based on refined feature optimization extraction

Country Status (1)

Country Link
CN (1) CN109409292B (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110288620B (en) * 2019-05-07 2023-06-23 南京航空航天大学 Image matching method based on line segment geometric features and aircraft navigation method
CN110472543B (en) * 2019-08-05 2022-10-25 电子科技大学 Mechanical drawing comparison method based on local connection feature matching
CN110929598B (en) * 2019-11-07 2023-04-18 西安电子科技大学 Unmanned aerial vehicle-mounted SAR image matching method based on contour features
CN111311573B (en) * 2020-02-12 2024-01-30 贵州理工学院 Branch determination method and device and electronic equipment
WO2021248270A1 (en) * 2020-06-08 2021-12-16 上海交通大学 Heterogeneous image registration method and system
CN112017223B (en) * 2020-09-11 2024-01-30 西安电子科技大学 Heterologous image registration method based on improved SIFT-Delaunay
CN112307901A (en) * 2020-09-28 2021-02-02 国网浙江省电力有限公司电力科学研究院 Landslide detection-oriented SAR and optical image fusion method and system
CN112288009A (en) * 2020-10-29 2021-01-29 西安电子科技大学 R-SIFT chip hardware Trojan horse image registration method based on template matching
CN112465852B (en) * 2020-12-03 2024-01-30 国网山西省电力公司晋城供电公司 Improved region growing method for infrared image segmentation of power equipment
CN112712510B (en) * 2020-12-31 2022-12-30 中国电子科技集团公司第十四研究所 Different-source image matching method based on gradient and phase consistency
CN112801141B (en) * 2021-01-08 2022-12-06 吉林大学 Heterogeneous image matching method based on template matching and twin neural network optimization
CN112734816B (en) * 2021-01-13 2023-09-05 西安电子科技大学 Heterologous image registration method based on CSS-Delaunay
CN112907453B (en) * 2021-03-16 2022-02-01 中科海拓(无锡)科技有限公司 Image correction method for inner structure of notebook computer
CN113408370B (en) * 2021-05-31 2023-12-19 西安电子科技大学 Forest change remote sensing detection method based on adaptive parameter genetic algorithm
CN113470788B (en) * 2021-07-08 2023-11-24 山东志盈医学科技有限公司 Synchronous browsing method and device for multiple digital slices
CN114565653B (en) * 2022-03-02 2023-07-21 哈尔滨工业大学 Heterologous remote sensing image matching method with rotation change and scale difference
CN114445472B (en) * 2022-03-04 2023-05-26 山东胜算软件科技有限公司 Multi-step image registration method based on affine transformation and template matching
CN114723770B (en) * 2022-05-16 2022-08-09 中国人民解放军96901部队 Different-source image matching method based on characteristic spatial relationship
CN114878583B (en) * 2022-07-08 2022-09-20 四川大学 Image processing method and system for dark field imaging of distorted spot lighting defects
CN115599125B (en) * 2022-12-14 2023-04-07 电子科技大学 Navigation aid light control strategy selection method based on edge calculation
CN116129146B (en) * 2023-03-29 2023-09-01 中国工程物理研究院计算机应用研究所 Heterogeneous image matching method and system based on local feature consistency
CN117115242B (en) * 2023-10-17 2024-01-23 湖南视比特机器人有限公司 Identification method of mark point, computer storage medium and terminal equipment

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101339036A (en) * 2008-08-20 2009-01-07 北京航空航天大学 Terrain auxiliary navigation method and apparatus
CN102194225A (en) * 2010-03-17 2011-09-21 中国科学院电子学研究所 Automatic registering method for coarse-to-fine space-borne synthetic aperture radar image
CN102298779A (en) * 2011-08-16 2011-12-28 淮安盈科伟力科技有限公司 Image registering method for panoramic assisted parking system
CN103345757A (en) * 2013-07-19 2013-10-09 武汉大学 Optical image and SAR image automatic registration method within multilevel multi-feature constraint
CN103514606A (en) * 2013-10-14 2014-01-15 武汉大学 Heterology remote sensing image registration method
CN103903019A (en) * 2014-04-11 2014-07-02 北京工业大学 Automatic generating method for multi-lane vehicle track space-time diagram
CN104867126A (en) * 2014-02-25 2015-08-26 西安电子科技大学 Method for registering synthetic aperture radar image with change area based on point pair constraint and Delaunay
CN105184801A (en) * 2015-09-28 2015-12-23 武汉大学 Optical and SAR image high-precision registration method based on multilevel strategy
CN106886794A (en) * 2017-02-14 2017-06-23 湖北工业大学 Take the heterologous remote sensing image homotopy mapping method of high-order structures feature into account
CN106960449A (en) * 2017-03-14 2017-07-18 西安电子科技大学 The heterologous method for registering constrained based on multiple features
CN108447016A (en) * 2018-02-05 2018-08-24 西安电子科技大学 The matching process of optical imagery and SAR image based on straight-line intersection
CN108446652A (en) * 2018-03-27 2018-08-24 武汉大学 Polarimetric SAR image terrain classification method based on dynamic texture feature

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2930190C (en) * 2015-05-16 2019-09-24 Tata Consultancy Services Limited Method and system for planogram compliance check based on visual analysis

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101339036A (en) * 2008-08-20 2009-01-07 北京航空航天大学 Terrain auxiliary navigation method and apparatus
CN102194225A (en) * 2010-03-17 2011-09-21 中国科学院电子学研究所 Automatic registering method for coarse-to-fine space-borne synthetic aperture radar image
CN102298779A (en) * 2011-08-16 2011-12-28 淮安盈科伟力科技有限公司 Image registering method for panoramic assisted parking system
CN103345757A (en) * 2013-07-19 2013-10-09 武汉大学 Optical image and SAR image automatic registration method within multilevel multi-feature constraint
CN103514606A (en) * 2013-10-14 2014-01-15 武汉大学 Heterology remote sensing image registration method
CN104867126A (en) * 2014-02-25 2015-08-26 西安电子科技大学 Method for registering synthetic aperture radar image with change area based on point pair constraint and Delaunay
CN103903019A (en) * 2014-04-11 2014-07-02 北京工业大学 Automatic generating method for multi-lane vehicle track space-time diagram
CN105184801A (en) * 2015-09-28 2015-12-23 武汉大学 Optical and SAR image high-precision registration method based on multilevel strategy
CN106886794A (en) * 2017-02-14 2017-06-23 湖北工业大学 Take the heterologous remote sensing image homotopy mapping method of high-order structures feature into account
CN106960449A (en) * 2017-03-14 2017-07-18 西安电子科技大学 The heterologous method for registering constrained based on multiple features
CN108447016A (en) * 2018-02-05 2018-08-24 西安电子科技大学 The matching process of optical imagery and SAR image based on straight-line intersection
CN108446652A (en) * 2018-03-27 2018-08-24 武汉大学 Polarimetric SAR image terrain classification method based on dynamic texture feature

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
Automatic target recognition of synthetic aperture radar (SAR) images based on optimal selection of Zernike moments features; Mehdi Amoon et al.; IET Computer Vision; 2014-04-01; vol. 8, no. 2, pp. 77-85 *
Fast morphological pyramid matching algorithm based on the Hausdorff distance; Jie Kang et al.; 2011 IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems; 2011-03-23; pp. 288-292 *
Heterogeneous image matching based on regions of interest; 胡思茹; China Masters' Theses Full-text Database, Engineering Science & Technology II; 2020-02-15 (no. 02); p. C028-142 *
Image registration method based on improved Hausdorff distance; 李伟峰 et al.; Remote Sensing for Land & Resources; 2014-06-30; vol. 26, no. 2, pp. 93-98 *
Optical and SAR image registration under template-matching constraints; 杨勇 et al.; Systems Engineering and Electronics; 2019-10-31; vol. 41, no. 10, pp. 2235-2242 *
Research on key techniques of heterogeneous image matching based on coarse contours; 赵妍; China Masters' Theses Full-text Database, Information Science & Technology; 2015-01-15 (no. 01); p. I138-1481 *
Research on optical and SAR image matching methods based on structural features; 项宇泽; China Masters' Theses Full-text Database, Information Science & Technology; 2019-02-15 (no. 02); p. I138-1813 *
Fast registration of optical and SAR water-area images combining two-stage Otsu and SIFT; 曹哲 et al.; Journal of Computer-Aided Design & Computer Graphics; 2017-11-30; vol. 29, no. 11, pp. 1963-1970 *

Also Published As

Publication number Publication date
CN109409292A (en) 2019-03-01

Similar Documents

Publication Publication Date Title
CN109409292B (en) Heterogeneous image matching method based on refined feature optimization extraction
CN110097093B (en) Method for accurately matching heterogeneous images
CN107154040B (en) Tunnel lining surface image crack detection method
CN103345757B (en) Optics under multilevel multi-feature constraint and SAR image autoegistration method
CN111079556A (en) Multi-temporal unmanned aerial vehicle video image change area detection and classification method
CN111145228B (en) Heterologous image registration method based on fusion of local contour points and shape features
CN112085772B (en) Remote sensing image registration method and device
CN103839265A (en) SAR image registration method based on SIFT and normalized mutual information
CN113256653B (en) Heterogeneous high-resolution remote sensing image registration method for high-rise ground object
CN110569861A (en) Image matching positioning method based on point feature and contour feature fusion
CN109978848A (en) Method based on hard exudate in multiple light courcess color constancy model inspection eye fundus image
Wang Automatic extraction of building outline from high resolution aerial imagery
CN114494371A (en) Optical image and SAR image registration method based on multi-scale phase consistency
Huang et al. SAR and optical images registration using shape context
CN108509835B (en) PolSAR image ground object classification method based on DFIC super-pixels
CN104700359A (en) Super-resolution reconstruction method of image sequence in different polar axis directions of image plane
CN109785318B (en) Remote sensing image change detection method based on facial line primitive association constraint
CN114565653B (en) Heterologous remote sensing image matching method with rotation change and scale difference
CN115131555A (en) Overlapping shadow detection method and device based on superpixel segmentation
CN111696054B (en) Rubber dam body detection method based on full-polarization SAR image
CN115035326A (en) Method for accurately matching radar image and optical image
Wang et al. Mapping road based on multiple features and B-GVF snake
CN114862883A (en) Target edge extraction method, image segmentation method and system
Xiong et al. A method of acquiring tie points based on closed regions in SAR images
Xu et al. An automatic optical and sar image registration method using iterative multi-level and refinement model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant