CN104766323B - A kind of Point matching method of remote sensing images - Google Patents
- Publication number: CN104766323B
- Application number: CN201510160933.1A
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Other Investigation Or Analysis Of Materials By Electrical Means (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a point matching method for remote sensing images, used to screen the correct matches out of an initial matching point set for remote sensing images with large background changes. The method exploits the similarity of the global structure formed by the correct matching points in the two images. First, many corresponding triangle pairs are formed from arbitrarily selected triples of point pairs; the differences between corresponding angles and between corresponding side-length ratios of each triangle pair are computed, the spatial rotation order of the three vertices relative to a reference point is compared, and the triangle pairs satisfying the similarity conditions are selected. Then, the frequency with which each point pair occurs in these similar triangles is counted, a histogram is built, a threshold is set, and point pairs whose frequency falls below the threshold are removed. Finally, a spatial consistency measure is used to remove mismatched points whose coordinates are very similar. The invention improves the point matching accuracy of remote sensing images and thereby their registration accuracy.
Description
Technical Field
The invention relates to the technical field of remote sensing image processing, and in particular to a point matching method for remote sensing images.
Background Art
Image registration is one of the most fundamental techniques in image processing and has been widely applied in fields such as medicine, pattern recognition, remote sensing, and computer vision. Registration aligns different images so that all of their features can be expressed in the same coordinate system. Feature matching is the most important step in image registration: if the feature matching set contains too many mismatches, wrong transformation parameters will be produced, so a robust and accurate feature matching method is essential. Most feature matching methods in current use are point matching methods, which fall into two main types: 1) matching based on local gray-level information; 2) matching based on local neighborhood structure or global structure. SIFT is a point feature extraction and matching method based on local descriptors, which finds matching point pairs according to the similarity of the point descriptors. However, when this method is applied to remote sensing images, especially images taken before and after a disaster, the local gray values change greatly; relying solely on local gray-level information is very unreliable and tends to produce many mismatched points.
RANSAC is a classic point matching method that estimates the transformation parameters from the initial matching point set while removing the mismatches in it, but it performs poorly when the proportion of mismatched pairs is large. GTM uses the spatial relationships within the K-nearest-neighborhood of each feature point to find correct matches in the initial matching set, but it cannot rule out mismatches that share the same neighborhood structure; in addition, correct matches surrounded by many mismatched points may be removed by mistake. The WGTM algorithm, an improved version of GTM, uses the angular distance between the line segments connecting feature points as a weight, but it can only remove some of the mismatches with similar neighborhood structures and still cannot identify correct matching points with different neighborhood structures. The Bi-SOGC algorithm is a GTM-based method applied mainly to remote sensing images. It uses ordered line segments instead of unordered ones, exploits the spatial order of feature points to remove mismatches with similar neighborhood structures, and applies a recovery strategy to retrieve correct point pairs that were removed by mistake. This method is effective for some remote sensing images, but it cannot handle isolated correct matches that have no other feature points in their neighborhood.
Some existing point matching methods match on local gray-level information and others on local neighborhood structure. When applied to remote sensing image registration, however, and especially to images taken before and after a disaster, the large background changes mean that the initial matching set often contains many mismatches and few correct matches; the correct matches are then sparsely distributed and little correct structural information is available in any neighborhood, so local gray-level or neighborhood-structure information alone often cannot achieve satisfactory results. The present invention, the Triangle Transformation Matching (TTM) method, exploits the similarity of the global structure formed by the correct matching points to remove mismatches while retaining correct matches. Although the gray-level information of pre- and post-disaster remote sensing images changes greatly, the global structure formed by the correct matching points remains similar. The invention uses the triangle as the primitive of the global structure for similarity comparison and finally extracts the set of correct matching points.
Summary of the Invention
The purpose of the present invention is to provide a point matching method for remote sensing images that improves both the point matching accuracy and the registration accuracy of remote sensing images.
The technical solution adopted by the present invention is a point matching method for remote sensing images, comprising the following steps:
Step (1): The input is the initial matching point sets, i.e., two corresponding point sets extracted from the reference image and the sensed image, P = {p_i} and P' = {p_i'}, i = 1, 2, 3, ..., N, where N is the number of points in each set. Triples of point pairs are selected arbitrarily to form many corresponding triangle pairs; an arbitrarily selected triple is denoted (p_i, p_j, p_k) and (p_i', p_j', p_k');
Step (2): Compute the corresponding angle differences and corresponding side-length-ratio differences between the two triangles so formed;
Step (3): Let (x_i, y_i) and (x_i', y_i') denote the coordinates of p_i and p_i'. Reference points p_d and p_d' are set in the two triangles, with coordinates computed from the three vertex coordinates;
Step (4): Set the triangle similarity thresholds θ_b = 5° and l_b = 0.1, then screen out the qualifying triangle pairs according to the order consistency of the three vertices relative to the reference points and the conditions Δθ_1, Δθ_2, Δθ_3 ∈ [0, θ_b] and Δl_1, Δl_2, Δl_3 ∈ [0, l_b];
Step (5): Among the selected triangle pairs, count the number of occurrences of each point pair and build a frequency histogram;
Step (6): Set a threshold k_p according to experimental analysis and remove the point pairs that occur fewer than k_p times;
Step (7): Compute the model transformation parameters from all remaining point pairs, denoted P_o and P_o', of which there are n;
T = argmin E(T(θ))  (10)
where E(T(θ)) = sqrt((1/n) Σ_{i=1}^{n} d(i)²) is the root-mean-square error, T(θ) denotes the transformation parameters obtained from the point sets P_o and P_o' by the least-squares method, and d(i) = ||T(θ)(p_i') − p_i||, i = 1, 2, 3, ..., n, is the transformation error of each point pair;
Step (8): Set a threshold E_t. While E(T(θ)) > E_t, the point pair with the largest d(i) value is removed and the point sets are updated; when the condition E(T(θ)) ≤ E_t is satisfied, the iteration ends and two correctly matched point sets are obtained.
The method is thus used to screen the correct point matching set out of the initial matching point set of remote sensing images with large background changes.
Compared with the prior art, the advantages of the present invention are:
(1) The invention uses the global structure as the element of comparison, with the triangle as its primitive; this is easy to compute and ensures the stability of the matching results.
(2) The invention builds a histogram and screens according to the statistics, which ensures the robustness of the results.
(3) The invention uses a global consistency criterion to remove mismatches with similar coordinates, improving the correct matching rate.
Brief Description of the Drawings
Figure 1 shows the global structure formed by correctly matched feature points: (a) the image before the disaster and the global structure formed by its feature points; (b) the image after the disaster and the global structure formed by the corresponding correctly matched feature points;
Figure 2 is the overall flowchart of the point matching method for remote sensing images of the present invention;
Figure 3 shows an example of a mismatched point;
Figure 4 shows how the matching results vary with k_p;
Figure 5: (a) the initial matching result; (b) the TTM matching result;
Figure 6 is the occurrence-frequency histogram of the feature points (i.e., the frequency distribution histogram);
Figure 7: (a) the initial matching result; (b) the RANSAC matching result; (c) the GTM matching result; (d) the Bi-SOGC matching result; (e) the TTM matching result;
Figure 8 shows the matching accuracy results for images of different spectra;
Figure 9: (a) the initial matching result; (b) the RANSAC matching result; (c) the GTM matching result; (d) the Bi-SOGC matching result; (e) the TTM matching result;
Figure 10 shows the image matching accuracy results before and after the disaster.
Detailed Description of the Embodiments
The present invention is further described below with reference to the accompanying drawings and specific embodiments.
The input of the method of the present invention is two corresponding point sets extracted from the reference image and the sensed image, expressed as P = {p_i} and P' = {p_i'} (i = 1, 2, 3, ..., N), with p_i corresponding to p_i'. The initial matching point set contains many mismatched points, especially in remote sensing images with background changes. The TTM algorithm is based on the similarity of the overall structure formed by the correct points; it is used to remove most of the mismatched points while retaining all of the correct ones. Since an affine transformation often exists between different remote sensing images of the same area, the triangle rather than the quadrilateral or another polygon is chosen as the basic element of the overall structure, because it preserves shape better. Figure 1 shows two remote sensing images taken before and after a disaster; it can be seen that the overall structures formed by the correctly matched points are similar. As shown in Figure 2, the method comprises three parts: Part 1, feature extraction for corresponding triangles; Part 2, building the frequency histogram; Part 3, removing the residual mismatched points. The invention is described in detail below with reference to the drawings.
1. Feature Extraction for Corresponding Triangles
Three matching point pairs are selected arbitrarily from the initial matching point set, denoted (p_i, p_j, p_k) and (p_i', p_j', p_k'). The following steps are then performed:
(1) Form two triangles in the two images from (p_i, p_j, p_k) and (p_i', p_j', p_k'), then compute the differences between the corresponding angles and between the corresponding side-length ratios.
If all three point pairs are correct matches, the conditions Δθ_1, Δθ_2, Δθ_3 ∈ [0, θ_b] and Δl_1, Δl_2, Δl_3 ∈ [0, l_b] should be satisfied. Triples of point pairs that satisfy these conditions are recorded.
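Condition (1) can be sketched in Python. The patent's exact formulas for Δθ and Δl are given as images that are not reproduced in this text, so the sketch below assumes angle differences in degrees and side lengths normalized by triangle perimeter — a plausible reading, not the patent's exact definition:

```python
import math

def triangle_features(a, b, c):
    """Interior angles (degrees) and perimeter-normalized side lengths of
    triangle (a, b, c); each point is an (x, y) tuple."""
    la = math.dist(b, c)   # side opposite vertex a
    lb = math.dist(a, c)   # side opposite vertex b
    lc = math.dist(a, b)   # side opposite vertex c
    per = la + lb + lc
    def angle(opp, s1, s2):
        # law of cosines; clamp for floating-point safety
        cos_v = max(-1.0, min(1.0, (s1**2 + s2**2 - opp**2) / (2 * s1 * s2)))
        return math.degrees(math.acos(cos_v))
    angles = (angle(la, lb, lc), angle(lb, la, lc), angle(lc, la, lb))
    ratios = (la / per, lb / per, lc / per)  # normalization is an assumption
    return angles, ratios

def triangles_similar(tri, tri_p, theta_b=5.0, l_b=0.1):
    """Condition (1): corresponding angle differences within theta_b degrees
    and corresponding normalized side-length differences within l_b."""
    ang, rat = triangle_features(*tri)
    ang_p, rat_p = triangle_features(*tri_p)
    return (all(abs(x - y) <= theta_b for x, y in zip(ang, ang_p)) and
            all(abs(x - y) <= l_b for x, y in zip(rat, rat_p)))
```

A similarity-transformed copy of a triangle (scaled and translated) passes this test, while a triangle of different shape fails it.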
(2) As shown in Figure 3, (p_a, p_a') and (p_b, p_b') are two correct matches, while (p_c, p_c') is a mismatch. This case nevertheless satisfies condition (1), so (p_c, p_c') would be wrongly recorded as a correct match. Reference points (p_d, p_d') are therefore defined, with coordinates computed from the triangle vertices.
In Figure 3, let a vector rotate clockwise around the points p_d and p_d'. If all three feature point pairs are correct matches, the vector should pass through the three vertices in the same order in both images. As shown in Figure 3, the order of points traversed in the left image is (p_a, p_c, p_b), while in the right image it is (p_a', p_b', p_c'); therefore (p_c, p_c') will not be regarded as a correct match. Triples of point pairs that satisfy both condition (1) and condition (2) are selected into the subset S, and pairs such as (p_a, p_a') and (p_b, p_b') are selected when combined with another correct match. Because the structures formed by the correct matching points are similar, the correct matches will be selected into S (as long as at least three correct pairs exist), but some mismatches will also be wrongly selected; a histogram of the number of occurrences of each point in S is built in Part 2.
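Condition (2) — the same traversal order of corresponding vertices under a clockwise sweep around the reference point — can be sketched as follows. The reference point p_d is assumed here to be the triangle centroid, since the patent's coordinate formula is an image not reproduced in this text:

```python
import math

def centroid(tri):
    """Assumed reference point p_d: the centroid of the three vertices."""
    xs, ys = zip(*tri)
    return (sum(xs) / 3.0, sum(ys) / 3.0)

def rotation_order(tri):
    """Vertex indices sorted by clockwise angle around the reference point."""
    cx, cy = centroid(tri)
    ang = [math.atan2(y - cy, x - cx) for x, y in tri]
    # atan2 increases counterclockwise, so descending angle = clockwise sweep
    return tuple(sorted(range(3), key=lambda i: -ang[i]))

def same_cyclic_order(tri, tri_p):
    """Condition (2): the sweep must visit corresponding vertices in the same
    cyclic order in both images (the sweep may start at a different vertex)."""
    o, op = rotation_order(tri), rotation_order(tri_p)
    doubled = op + op
    return any(doubled[k:k + 3] == o for k in range(3))
```

Swapping two corresponding vertices (the Figure 3 case, where the left image visits (p_a, p_c, p_b) but the right visits (p_a', p_b', p_c')) reverses the cyclic order and fails the check.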
2. Building the Frequency Histogram
After steps (1) and (2) of Part 1 have been executed for every triple of point pairs, the number of occurrences of each point in S is counted. In theory, a triple containing even one mismatched point will not be selected into S, so the number of correct matches in S is certain to be far larger than the number of mismatches. The frequency histogram is built as follows:
hist_P = {k(p_i), i = 1, 2, ..., M}  (5)
where k(p_i) is the number of triples in the subset that contain the pair (p_i, p_i'), and M is the number of distinct points in S. From this, the maximum number of selections is:
k_max = max(k(p_1), k(p_2), ..., k(p_M))  (6)
To determine a proper threshold k_p, the recall and precision of 20 image pairs under different thresholds are shown in Figure 4.
Recall and precision are calculated as follows:
recall = (number of correct matching points extracted) / (number of all correct matching points)  (7)
precision = (number of correct matching points extracted) / (number of all extracted points)  (8)
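Equations (7)–(8) amount to the following on index sets (a minimal sketch; the set representation is an illustration, not taken from the patent):

```python
def recall_precision(extracted, ground_truth_correct):
    """extracted / ground_truth_correct: sets of point-pair indices.
    Returns (recall, precision) per Eqs. (7)-(8)."""
    true_pos = len(extracted & ground_truth_correct)  # correct pairs extracted
    recall = true_pos / len(ground_truth_correct)     # Eq. (7)
    precision = true_pos / len(extracted)             # Eq. (8)
    return recall, precision
```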
If m correct point pairs exist, the number of times a correct pair occurs in S can reach as many as (m − 1)(m − 2)/2 (one occurrence for each combination with two of the other m − 1 correct pairs), far larger than for a mismatch. The threshold is therefore set in combination with Figure 4:
Point pairs that occur fewer than k_p times are then removed. Figure 6 shows the occurrence-frequency statistics of all feature points in Figure 5; on verification, every point pair whose frequency exceeds the horizontal line is a correct pair, and no correct pair lies below the line.
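Parts 1 and 2 together amount to enumerating all triples, counting how many similar-triangle triples each point pair appears in, and keeping only pairs at or above k_p. A minimal sketch follows; the similarity predicate (conditions (1)–(2)) is passed in as a parameter, and k_p is taken as given since the patent's threshold formula (9) is not reproduced in this text:

```python
from collections import Counter
from itertools import combinations

def filter_by_triangle_histogram(P, P_prime, is_similar_triple, k_p):
    """P, P_prime: equal-length lists of (x, y) points (initial matches).
    is_similar_triple(tri, tri_p) -> bool implements conditions (1)-(2).
    Returns sorted indices of pairs occurring in at least k_p similar triples."""
    assert len(P) == len(P_prime)
    counts = Counter()
    for i, j, k in combinations(range(len(P)), 3):
        tri = (P[i], P[j], P[k])
        tri_p = (P_prime[i], P_prime[j], P_prime[k])
        if is_similar_triple(tri, tri_p):
            counts.update((i, j, k))   # each pair in the triple scores once
    return sorted(idx for idx, c in counts.items() if c >= k_p)
```

With an identity transform and a simple equality predicate, four consistent pairs each appear in three similar triples while an outlier pair appears in none.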
3. Removing the Residual Mismatches
Most mismatches are removed in the first two parts, but points such as (p_e, p_e') in Figure 3 remain. (p_e, p_e') is a mismatch, but because the coordinates of the two points are very close, mismatches of this kind cannot be removed by the preceding procedure. A spatial consistency measure is therefore used to judge whether a point pair is mismatched. After Parts 1 and 2, two matching point sets P_o and P_o' with n points each are obtained, and the transformation parameters T between the images can be computed by the following formula:
T = argmin E(T(θ))  (10)
where E(T(θ)) = sqrt((1/n) Σ_{i=1}^{n} d(i)²) is the root-mean-square error (RMSE), T(θ) denotes the transformation parameters obtained from the point sets P_o and P_o' by the least-squares method, and d(i) = ||T(θ)(p_i') − p_i||, i = 1, 2, 3, ..., n.
To remove mismatches with similar coordinates and obtain the optimal transformation parameters, a threshold E_t is set. In each iteration, while E(T(θ)) > E_t, the point pair with the largest d(i) value is removed and the point sets are updated. When the condition E(T(θ)) ≤ E_t is satisfied, the iteration ends and two correctly matched point sets are obtained.
Figure 5 shows two remote sensing images taken before and after the 2010 Haiti earthquake: (a) the initial matching result; (b) the matching result after applying the TTM algorithm.
Applying the above algorithm to remote sensing images, the experimental images are divided into two groups: 1) remote sensing images of different spectra; 2) remote sensing images taken before and after disasters. Precision, recall, and RMSE are used to evaluate the results, which are compared with the classic RANSAC, GTM, and Bi-SOGC methods.
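The Part 3 iteration can be sketched as follows. An affine transformation model fitted by least squares is assumed here, since the Detailed Description mentions affine transforms between remote sensing images but the text does not reproduce the exact model:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map src -> dst; src, dst are (n, 2) arrays."""
    A = np.hstack([src, np.ones((len(src), 1))])    # n x 3 design matrix
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)     # 3 x 2 parameters
    return M

def iterative_removal(P_o, P_o_prime, E_t=1.0):
    """Fit T on (P_o', P_o); while RMSE > E_t, drop the pair with the
    largest transformation error d(i) and refit."""
    P = np.asarray(P_o, float)
    Pp = np.asarray(P_o_prime, float)
    while len(P) >= 3:                               # affine needs >= 3 pairs
        M = fit_affine(Pp, P)
        pred = np.hstack([Pp, np.ones((len(Pp), 1))]) @ M
        d = np.linalg.norm(pred - P, axis=1)         # d(i) = ||T(p_i') - p_i||
        rmse = np.sqrt(np.mean(d ** 2))
        if rmse <= E_t:
            break                                    # E(T(theta)) <= E_t
        worst = int(np.argmax(d))                    # remove worst pair
        P = np.delete(P, worst, axis=0)
        Pp = np.delete(Pp, worst, axis=0)
    return P, Pp
```

On eight pairs related by a pure translation plus one wrong pair, the wrong pair has the largest residual, is removed first, and the remaining pairs then satisfy the RMSE threshold.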
Embodiment 1: remote sensing images of different spectra
Fifteen pairs of remote sensing images of different spectra were used, with the mismatch proportion set to 5%–95%. Matching examples and accuracy results are shown in Figures 7 and 8. In Figures 7(b), (c), (d), and (e), solid-line connections are correct matches and dashed-line connections are mismatches. Comparison with the other methods shows that the present method achieves the highest accuracy in precision, recall, and RMSE.
Embodiment 2: remote sensing images before and after disasters
Fifteen pairs of pre- and post-disaster remote sensing images were used, with the mismatch proportion set to 5%–95%. Matching examples and accuracy results are shown in Figures 9 and 10. In Figures 9(b), (c), (d), and (e), solid-line connections are correct matches and dashed-line connections are mismatches.
The comparative experiments demonstrate that the method of the present invention performs best on the accuracy indicators, especially when the proportion of correct matches is small.
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510160933.1A CN104766323B (en) | 2015-04-07 | 2015-04-07 | A kind of Point matching method of remote sensing images |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104766323A CN104766323A (en) | 2015-07-08 |
CN104766323B true CN104766323B (en) | 2018-03-06 |
Family
ID=53648132
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510160933.1A Active CN104766323B (en) | 2015-04-07 | 2015-04-07 | A kind of Point matching method of remote sensing images |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104766323B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108446725A (en) * | 2018-03-12 | 2018-08-24 | 杭州师范大学 | A kind of Image Feature Matching method of feature based triangle |
CN108596962A (en) * | 2018-04-23 | 2018-09-28 | 武汉大学 | A kind of heterologous remote sensing image reliable matching method under iteration triangular network constraint |
WO2021070188A1 (en) | 2019-10-11 | 2021-04-15 | Beyeonics Surgical Ltd. | System and method for improved electronic assisted medical procedures |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102184418A (en) * | 2011-06-10 | 2011-09-14 | 上海应用技术学院 | Triangle-area-representation-histogram-based image registration method |
CN103778626A (en) * | 2013-12-31 | 2014-05-07 | 北京理工大学 | Quick image registration method based on visual remarkable area |
CN103903249A (en) * | 2012-12-27 | 2014-07-02 | 纽海信息技术(上海)有限公司 | Image matching system and method |
Non-Patent Citations (3)
Title |
---|
Point-matching method for remote sensing images with background variation; Xiaolong Shi et al.; Journal of Applied Remote Sensing; Jan. 2015; Vol. 9, No. 1; see Section 2 * |
Image feature point matching method under triangle constraints (三角形约束下的图像特征点匹配方法); Wu Fei et al.; Journal of Computer-Aided Design & Computer Graphics; Mar. 2010; Vol. 22, No. 3; pp. 503–510 * |
An effective corner matching algorithm based on Delaunay triangulation (基于Delaunay三角化的有效角点匹配算法); Li Ganhua et al.; Signal Processing; Oct. 2007; Vol. 23, No. 5; pp. 695–698 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
EXSB | Decision made by SIPO to initiate substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||