CN109508674A - Airborne down-view heterogeneous image matching method based on region division - Google Patents
Airborne down-view heterogeneous image matching method based on region division
- Publication number
- CN109508674A (application CN201811348826.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- matching
- area
- real
- airborne
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
All under G (Physics); G06 (Computing; calculating or counting); G06V (Image or video recognition or understanding):
- G06V 20/13: Scenes; scene-specific elements → terrestrial scenes → satellite images
- G06V 10/267: Image preprocessing → segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region → by performing operations on regions, e.g. growing, shrinking or watersheds
- G06V 10/462: Extraction of image or video features → descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW] → salient features, e.g. scale invariant feature transforms [SIFT]
- G06V 10/50: Extraction of image or video features → by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; projection analysis
- G06V 10/757: Arrangements using pattern recognition or machine learning → image or video pattern matching → organisation of the matching processes → matching configurations of points or features
Abstract
The invention provides an airborne down-view heterogeneous image matching method based on region division, belonging to the technical field of image matching. The method first uses the standard deviation STD of the orientation histogram to determine the texture characteristics of the target image. If the target image is texture-rich, both the target image and the real-time image are segmented into several regions with the MeanShift image segmentation algorithm, and the corresponding mask images are generated. If the target image is not texture-rich, the real-time image is divided into several region blocks. The SIFT feature matching method is then used to match each real-time image region against all target image regions for consistency. Finally, an evaluation function based on the orientation histogram evaluates each matching result, and the optimal matching region is selected as the matching result. The invention solves the problem that existing methods achieve low matching accuracy on complex and heterogeneous airborne images, and can be used for heterogeneous image matching on unmanned aerial vehicles (UAVs).
Description
Technical Field
The invention relates to an airborne down-view heterogeneous image matching method and belongs to the technical field of image matching.
Background Art
With advances in unmanned aerial vehicle (UAV) technology, UAVs have attracted a large number of manufacturers and users, giving UAV-related technologies a broad market. UAV-based heterogeneous image matching is the process of aligning a real-time image from a UAV with a target image from a satellite; the technique is widely applied in UAV navigation, UAV landing, UAV strike guidance, and spacecraft navigation. However, airborne down-view images have complex structural characteristics, so accurate matching of airborne down-view heterogeneous images has become a key technology in UAV applications and is of great significance.
In UAV target localization, the target image is acquired offline while the localization image is acquired in real time, so the two images have different structures. The real-time images can be very complex, typically including scale-changed images, rotated images, seasonally changed images, blurred images, occluded images, and SAR (Synthetic Aperture Radar) images. This makes airborne down-view heterogeneous image matching a challenging problem, and classical image matching methods struggle to meet the application requirements of UAV heterogeneous image matching.
The most classical image matching approach at present is based on SIFT (Scale-Invariant Feature Transform) features. Wu Gang et al. invented a building image matching and fusion method based on contour extraction: object contours are extracted, straight lines are detected on the contour map, the two images are matched by a line matching algorithm according to the line features, the angles between the lines in the set of best matching pairs are computed to obtain an angle matrix, and similarity is computed on that matrix. Zhang Haopeng et al. invented a space target image matching method that coarsely matches three-view images of a space target with the GMS (Grid-based Motion Statistics) matching algorithm and introduces the NFA error-threshold criterion to reject mismatched point pairs, making the algorithm more adaptive. Wang Shuang et al. invented a deep-learning-based heterogeneous image matching method that builds a heterogeneous image patch dataset, preprocesses the images, extracts patch feature maps, derives feature vectors from the feature maps, fuses and normalizes them, trains an image matching network, and predicts the matching probability, effectively alleviating overfitting in image matching. The methods above are mature and usually reach fairly high matching accuracy. In practice, however, airborne down-view images are often far more complicated, as shown in Figs. 1 through 7; on such complex and heterogeneous airborne images these methods do not perform well, and techniques with higher matching accuracy are still needed.
Summary of the Invention
To solve the problem that prior-art methods achieve low matching accuracy on complex and heterogeneous airborne images, the present invention provides an airborne down-view heterogeneous image matching method based on region division.

The airborne down-view heterogeneous image matching method based on region division according to the present invention is realized by the following technical solution:
(1) Use the standard deviation STD of the orientation histogram as the parameter that determines the texture characteristics of the target image: if STD is greater than a threshold S, the target image is judged to be texture-rich; if STD is less than or equal to S, the target image is judged to be non-texture-rich.
(2) If the target image is texture-rich, segment both the target image and the real-time image into several regions with the MeanShift (mean shift) image segmentation algorithm, and generate the corresponding mask images by layering the segmented target image regions and real-time image regions.

If the target image is not texture-rich, divide the real-time image into several region blocks and treat the whole target image as a single region.
(3) Use the SIFT feature matching method to match each real-time image region against all target image regions for consistency.

(4) Evaluate each matching result with an evaluation function based on the orientation histogram, and select the optimal matching region as the matching result.
The most prominent features and significant beneficial effects of the present invention are:
1. The target image is classified, and different types of images are processed differently, which improves matching accuracy. In simulation experiments, the method of the present invention (the airborne down-view heterogeneous image matching method based on region division) improves image matching accuracy by about 15% over the traditional SIFT algorithm.

2. Multiple image processing techniques are fused, and the information in different types of real-time images is exploited, so the matching process is completed robustly and accurately in airborne down-view images containing heavy noise and interference.

3. The invention is of great significance to target localization systems based on airborne down-view images, and greatly expands the application scope of image-matching-based UAV target localization systems.

4. The invention uses the standard deviation of the orientation histogram as an evaluation parameter to determine the texture characteristics of the target image, classifying targets into texture-rich and non-texture-rich images; handling the two classes separately improves the robustness of the system.

5. For texture-rich images, the invention first obtains mask images of the different segmented regions with the MeanShift segmentation method, matches the regions separately through the masks, and takes the result of the best-matching region as the final matching result. This makes intra-class regions more similar and inter-class regions less similar, so target matching accuracy is higher.

6. For non-texture-rich images, the invention divides the wide real-time image into blocks, matches each block against the target image, and takes the result of the best-matching block as the final matching result. For non-texture-rich images, segmentation produces scattered regions in which accurate matching cannot be completed; dividing the image into blocks likewise strengthens intra-class region similarity and weakens inter-class region similarity to some extent, which improves matching accuracy for non-texture-rich target images.

7. The invention proposes an evaluation function based on the orientation histogram to obtain the matching result of the optimal matching region. To guarantee the accuracy of the matching result, the Bhattacharyya distance (BD) is used to compare the similarity of the two matched images; a larger BD value indicates a more accurate match, which makes the final matching result more reliable.
Brief Description of the Drawings
Fig. 1 is a real-time image with scale change;
Fig. 2 is a real-time image with rotation change;
Fig. 3 is a real-time image taken in spring;
Fig. 4 is a real-time image taken in winter;
Fig. 5 is a blurred real-time image;
Fig. 6 is an occluded real-time image;
Fig. 7 is a real-time synthetic aperture radar (SAR) image;
Fig. 8 is a schematic flow chart of the method of the present invention;
Fig. 9 is the image of site a (also the target image in the example);
Fig. 10 is the orientation histogram of site a, where "Pixel number" denotes the number of pixels and "Angle" denotes the angle;
Fig. 11 is the image of site b;
Fig. 12 is the orientation histogram of site b;
Fig. 13 is the image of site c;
Fig. 14 is the orientation histogram of site c;
Fig. 15 is the image of site d;
Fig. 16 is the orientation histogram of site d;
Fig. 17 shows the execution time and matching rate for different values of h in the SIFT algorithm;
Fig. 18 is a schematic diagram of maximum and minimum extremum detection in the DoG images;
Fig. 19 is a schematic diagram of descriptor computation;
Fig. 20 is the real-time image used in the example to match the image of site c.
Detailed Description of the Embodiments
Embodiment 1: This embodiment is described with reference to Fig. 8. The airborne down-view heterogeneous image matching method based on region division given in this embodiment is used for a target image (e.g., the target image shown in Fig. 9) and complex, heterogeneous airborne real-time images. The method first uses the standard deviation STD of the orientation histogram as the parameter that determines the texture characteristics of the target image. If the target image is texture-rich, an image matching method based on image segmentation completes the airborne down-view localization process: image segmentation produces mask images for the different regions of the image; within these regions, the improved SIFT image matching method matches each region, and an evaluation function based on the orientation histogram selects the optimal matching region as the matching result. If the target image is not texture-rich, an image matching method based on region-block division completes the localization process: the real-time image is divided into regions, the improved SIFT image matching method matches each block region against the target image, and the same orientation-histogram-based evaluation function selects the optimal matching block as the matching result. The method specifically includes the following steps:
(1) Use the standard deviation STD of the orientation histogram as the parameter that determines the texture characteristics of the target image: if STD is greater than the threshold S, the target image is judged texture-rich; if STD is less than or equal to S, it is judged non-texture-rich. Figs. 9 to 16 show the differently textured images of the four sites a, b, c, and d together with their orientation histograms; the image of site a has STD = 0.152, site b has STD = 0.157, site c has STD = 0.129, and site d has STD = 0.0747.
(2) If the target image is texture-rich, segment both the target image and the real-time image into several regions with the MeanShift image segmentation algorithm, and generate the corresponding mask images by layering the segmented target image regions and real-time image regions.
If the target image is not texture-rich, the features detected by traditional feature detection algorithms are not distinctive enough, and image matching over the whole large real-time image will fail. This embodiment therefore divides the real-time image into several regions, while the whole target image is treated as a single region. (In the later consistency matching step, the target image is transformed to the corresponding position in the real-time image, and the position of the target is finally calibrated from the position of the sub-image within the large image. This method of dividing the real-time image into regions, matching each region against the target image with the SIFT image matching method, and obtaining the optimal matching region with the orientation-histogram-based evaluation function is named the block-division-based image matching algorithm; a minimal sketch follows.)
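A minimal sketch of this block division (the grid size and overlap ratio are illustrative assumptions; the patent does not fix them). Keeping each block's offset lets a match found inside a block be mapped back to its position in the full real-time image:

```python
import numpy as np

def divide_into_blocks(image, rows=3, cols=3, overlap=0.2):
    """Split a wide real-time image into a grid of overlapping blocks.

    The overlap keeps a target that straddles a block boundary fully
    visible in at least one block."""
    h, w = image.shape[:2]
    bh, bw = h // rows, w // cols
    dy, dx = int(bh * overlap), int(bw * overlap)
    blocks = []
    for r in range(rows):
        for c in range(cols):
            y0, x0 = max(r * bh - dy, 0), max(c * bw - dx, 0)
            y1, x1 = min((r + 1) * bh + dy, h), min((c + 1) * bw + dx, w)
            blocks.append(((y0, x0), image[y0:y1, x0:x1]))
    return blocks  # each entry: (top-left offset, block pixels)
```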
MeanShift is a feature-space analysis method; for image segmentation, the problem is mapped into the colour feature space. Image segmentation then amounts to finding the cluster centre of each pixel, and MeanShift takes the centre to be a local maximum of the probability density. Kernel density estimation (which in pattern recognition can be seen as the Parzen window technique) is the most popular probability density estimation method. For a set of n data points x_i, i = 1, ..., n, in the d-dimensional space R^d, given a kernel K(x) and a d×d symmetric positive-definite bandwidth matrix H, the multivariate kernel density estimate at a point x is defined as:

\hat{f}(x) = \frac{1}{n} \sum_{i=1}^{n} K_H(x - x_i)

Here the following condition must be satisfied:

K_H(x) = |H|^{-1/2} K(H^{-1/2} x)

At the same time, K(x) satisfies:

\int_{R^d} K(x)\,dx = 1, \quad \lim_{\|x\| \to \infty} \|x\|^d K(x) = 0, \quad \int_{R^d} x K(x)\,dx = 0, \quad \int_{R^d} x x^T K(x)\,dx = c_K I

Here c_K is a constant, x^T denotes the transpose of x, and I denotes the d×d identity matrix. In practical applications, to reduce the complexity of the algorithm, H is taken as a diagonal matrix H = \mathrm{diag}[h_1^2, \dots, h_d^2] or, more simply, proportional to the identity, H = h^2 I; the latter gives the best-known expression:

\hat{f}(x) = \frac{1}{n h^d} \sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right)

Here h_1, ..., h_d are the diagonal elements of the diagonal matrix H, and h is the width of the kernel K(x), where K(x) usually takes a special class of radially symmetric kernels:

K(x) = c_{K,d}\, k(\|x\|^2)

where c_{K,d} is a normalization coefficient and k(\|x\|^2) is the profile of the kernel. The final probability density estimate can then be written as:

\hat{f}_{h,K}(x) = \frac{c_{K,d}}{n h^d} \sum_{i=1}^{n} k\!\left(\left\| \frac{x - x_i}{h} \right\|^2\right)

MeanShift is an effective method for determining the extrema of this probability density estimate. Let g(x) = -k'(x), where k'(x) is the derivative of k(x), and define the kernel G(x) = c_{g,d}\, g(\|x\|^2), where c_{g,d} is a normalization constant. From the expression above, the MeanShift vector is obtained:

m_{h,G}(x) = \frac{\sum_{i=1}^{n} x_i\, g\!\left(\left\| \frac{x - x_i}{h} \right\|^2\right)}{\sum_{i=1}^{n} g\!\left(\left\| \frac{x - x_i}{h} \right\|^2\right)} - x

Execution of the MeanShift algorithm therefore consists of two parts: 1) computing the MeanShift vector m_{h,G}(x); 2) translating the kernel window G(x) by m_{h,G}(x). This method is guaranteed to converge to a point where the gradient of the final probability density estimate is zero, i.e., to find the cluster centre.
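As a rough illustration of how the segmented regions can be turned into the layered mask images, the following sketch uses OpenCV's pyrMeanShiftFiltering as the mean-shift stage and then splits the filtered image into per-region masks by colour quantisation and connected components. This splitting strategy, the bandwidth parameters, and the minimum region area are all illustrative assumptions rather than the patent's prescribed procedure:

```python
import cv2
import numpy as np

def meanshift_region_masks(bgr, sp=21, sr=30, min_area=500):
    """Mean-shift filter an image and return one 8-bit mask per region.

    sp and sr are the spatial and colour bandwidths (the kernel width h);
    min_area discards tiny fragments. All values here are assumptions.
    """
    filtered = cv2.pyrMeanShiftFiltering(bgr, sp, sr)
    # Quantise the filtered colours so pixels drawn to the same mode share
    # a label, then split each colour layer into connected components.
    quant = (filtered // 32).astype(np.int32)
    flat = quant[:, :, 0] * 64 + quant[:, :, 1] * 8 + quant[:, :, 2]
    masks = []
    for value in np.unique(flat):
        layer = (flat == value).astype(np.uint8)
        n, labels = cv2.connectedComponents(layer)
        for i in range(1, n):
            mask = np.where(labels == i, 255, 0).astype(np.uint8)
            if cv2.countNonZero(mask) >= min_area:
                masks.append(mask)
    return masks
```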
Segmentation of the target image works well; the real-time image, however, is usually large. Fig. 17 shows the execution time and matching rate for different values of h in the SIFT algorithm; to meet the real-time requirement, the resolution of the image can be reduced before segmentation.
(3) Use the SIFT feature matching method to match each real-time image region against all target image regions for consistency.

(4) Evaluate each matching result with the evaluation function based on the orientation histogram, and select the optimal matching region as the matching result.
Embodiment 2: This embodiment differs from Embodiment 1 in that, in step (3), corner points are added as feature keypoints in the process of using the SIFT feature matching method to match each real-time image region against all target image regions for consistency.
The traditional SIFT algorithm comprises four parts: scale-space extremum detection, keypoint localization, orientation assignment, and keypoint descriptor determination. The improved SIFT image matching method combines corner keypoints and SIFT keypoints into the algorithm's keypoint set: the corner regions are used as masks to generate keypoints in those regions, and the SIFT matching algorithm with corners added as feature keypoints is executed within the segmented image regions, which improves matching performance.
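One plausible reading of this combination, sketched with OpenCV (goodFeaturesToTrack with the Harris option stands in for the corner detector of Embodiment 6; here the corners are appended directly to the SIFT keypoint set before descriptors are computed, and all parameter values are illustrative):

```python
import cv2

def sift_with_corner_keypoints(gray, region_mask, max_corners=200):
    """Build the combined keypoint set inside one segmented region:
    SIFT keypoints plus Harris-style corners, then SIFT descriptors."""
    sift = cv2.SIFT_create()
    kps = list(sift.detect(gray, region_mask))
    corners = cv2.goodFeaturesToTrack(gray, max_corners, qualityLevel=0.01,
                                      minDistance=5, mask=region_mask,
                                      useHarrisDetector=True)
    if corners is not None:
        kps += [cv2.KeyPoint(float(x), float(y), 7.0)
                for x, y in corners.reshape(-1, 2)]
    return sift.compute(gray, kps)  # (keypoints, 128-D descriptors)
```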
The SIFT image matching method with corner points added as feature keypoints specifically includes the following steps:
(3.1) Peak selection in scale space
Peak selection in scale space searches for peaks in the convolved images D(x, y, σ) of the DoG (difference-of-Gaussian) function at different scales separated by a constant factor k, where D(x, y, σ) is defined as:
D(x, y, σ) = (G(x, y, kσ) - G(x, y, σ)) * I(x, y) = L(x, y, kσ) - L(x, y, σ)
G(x, y, σ) denotes the Gaussian filter and L(x, y, σ) the Gaussian-smoothed image. Scale-space maxima and minima are found by comparing the current pixel with its 26 neighbours in the 3×3×3 neighbourhood across adjacent scales; if the current value is the largest or the smallest, it becomes a maximum or minimum candidate pixel, as shown in Fig. 18.
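For illustration, a brute-force sketch of this peak selection (σ, k, the number of levels, and the 0.03 contrast threshold are assumptions carried over from the surrounding text; a practical implementation would add octaves and vectorise the comparison):

```python
import cv2
import numpy as np

def dog_extrema(gray, sigma=1.6, k=2 ** 0.5, levels=5, thresh=0.03):
    """Find DoG scale-space extrema: blur at scales k^i * sigma,
    difference adjacent blurs, and keep pixels that beat all 26
    neighbours in the 3x3x3 scale-space cube (cf. Fig. 18)."""
    g = gray.astype(np.float32) / 255.0
    blurred = [cv2.GaussianBlur(g, (0, 0), sigma * k ** i) for i in range(levels)]
    dogs = np.stack([blurred[i + 1] - blurred[i] for i in range(levels - 1)])
    peaks = []
    for s in range(1, dogs.shape[0] - 1):
        for y in range(1, dogs.shape[1] - 1):
            for x in range(1, dogs.shape[2] - 1):
                v = dogs[s, y, x]
                cube = dogs[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]
                if abs(v) > thresh and (v >= cube.max() or v <= cube.min()):
                    peaks.append((x, y, s))
    return peaks
```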
(3.2) Keypoint determination
In 2002, Lowe et al. proposed a method for precisely locating the sampling point by Taylor-expanding D(x, y, σ):

D(\mathbf{x}) = D + \frac{\partial D^T}{\partial \mathbf{x}}\,\mathbf{x} + \frac{1}{2}\,\mathbf{x}^T \frac{\partial^2 D}{\partial \mathbf{x}^2}\,\mathbf{x}

Here D denotes the value of D(x) at the sampling point x = (0, 0, 0)^T. The extremum of the sampling point is obtained by setting the gradient of this function to zero:

\hat{\mathbf{x}} = -\left(\frac{\partial^2 D}{\partial \mathbf{x}^2}\right)^{-1} \frac{\partial D}{\partial \mathbf{x}}

Candidate points whose interpolated peak value

D(\hat{\mathbf{x}}) = D + \frac{1}{2}\,\frac{\partial D^T}{\partial \mathbf{x}}\,\hat{\mathbf{x}}

is smaller than 0.03 in magnitude are rejected as low-contrast, and the remaining positions are kept as keypoints; edge effects are also taken into account.
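A sketch of this sub-pixel refinement with finite-difference derivatives over the DoG stack (the [scale, row, col] array layout is an assumption, and the singular-Hessian case is left unhandled for brevity):

```python
import numpy as np

def refine_keypoint(D, s, y, x, contrast_thresh=0.03):
    """Solve grad + H @ offset = 0 for the sub-pixel offset x_hat, then
    test the interpolated contrast D(x_hat) against the threshold."""
    grad = 0.5 * np.array([
        D[s, y, x + 1] - D[s, y, x - 1],        # dD/dx
        D[s, y + 1, x] - D[s, y - 1, x],        # dD/dy
        D[s + 1, y, x] - D[s - 1, y, x],        # dD/dsigma
    ])
    H = np.empty((3, 3))
    H[0, 0] = D[s, y, x + 1] - 2 * D[s, y, x] + D[s, y, x - 1]
    H[1, 1] = D[s, y + 1, x] - 2 * D[s, y, x] + D[s, y - 1, x]
    H[2, 2] = D[s + 1, y, x] - 2 * D[s, y, x] + D[s - 1, y, x]
    H[0, 1] = H[1, 0] = 0.25 * (D[s, y + 1, x + 1] - D[s, y + 1, x - 1]
                                - D[s, y - 1, x + 1] + D[s, y - 1, x - 1])
    H[0, 2] = H[2, 0] = 0.25 * (D[s + 1, y, x + 1] - D[s + 1, y, x - 1]
                                - D[s - 1, y, x + 1] + D[s - 1, y, x - 1])
    H[1, 2] = H[2, 1] = 0.25 * (D[s + 1, y + 1, x] - D[s + 1, y - 1, x]
                                - D[s - 1, y + 1, x] + D[s - 1, y - 1, x])
    offset = -np.linalg.solve(H, grad)           # x_hat
    contrast = D[s, y, x] + 0.5 * grad @ offset  # D(x_hat)
    return offset, abs(contrast) >= contrast_thresh
```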
(3.3) Orientation assignment
The gradient magnitude m and orientation θ at each image point are computed from pixel differences:

m(x, y) = \sqrt{\bigl(L(x+1, y) - L(x-1, y)\bigr)^2 + \bigl(L(x, y+1) - L(x, y-1)\bigr)^2}

\theta(x, y) = \tan^{-1}\frac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)}
Using these two formulas, the orientation histogram of all sampling points is computed; the peak of the histogram can be regarded as the dominant direction of the local gradients, as shown on the right of Fig. 19.
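A sketch of this computation on a Gaussian-smoothed image L (the 36-bin resolution is the usual choice in SIFT, assumed here):

```python
import numpy as np

def orientation_histogram(L, bins=36):
    """Pixel-difference gradients m and theta over the interior of L,
    accumulated into a magnitude-weighted orientation histogram whose
    peak gives the dominant local gradient direction."""
    dx = L[1:-1, 2:] - L[1:-1, :-2]          # L(x+1,y) - L(x-1,y)
    dy = L[2:, 1:-1] - L[:-2, 1:-1]          # L(x,y+1) - L(x,y-1)
    m = np.sqrt(dx ** 2 + dy ** 2)
    theta = np.arctan2(dy, dx)
    hist, _ = np.histogram(theta, bins=bins, range=(-np.pi, np.pi), weights=m)
    return hist / max(hist.sum(), 1e-12)     # normalised histogram
```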
(3.4) Local descriptor determination and keypoint matching
The computation of the SIFT descriptor is shown in Fig. 19. At each keypoint, the image gradients and orientations are computed, and an orientation histogram is created for each 4×4 subregion around the keypoint; each histogram contains 8 directions, each arrow represents one direction of the histogram, and its length represents the magnitude in that direction. A gradient sample selects 4×4 subregions, each of which yields one orientation histogram; these histograms, arranged as in the middle of Fig. 19, form the SIFT descriptor. In this embodiment a 4×4 array of subregions is used, giving a 4×4×8 = 128-dimensional vector, which is used for the similarity measurement in image matching.
Once the keypoint descriptors of the images have been generated, for every feature point in the target the corresponding consistent keypoint must be found in the real-time image; this process is called keypoint matching. The problem can be described as follows: given a set T and an unknown sample x, find the most probable class c corresponding to x. Using a Bayesian approach, one can choose

c = \arg\max_{c_j} P(c_j \mid x, T)

where P(c_j | x, T) denotes the posterior probability that x belongs to class c_j, and c_j is an element of the training data set T. The k-nearest-neighbour algorithm used in this embodiment is as follows:

Step 1: find the k nearest neighbours of x in the set T, i.e., find a subset Y \subseteq T with |Y| = k that satisfies

\max_{y \in Y} \|x - y\| \le \min_{z \in T \setminus Y} \|x - z\|
Step 2: among the keypoint pairs in Y, place the frequently occurring pairs into the set T, breaking randomly generated associations.
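A sketch of the descriptor matching step; it uses Lowe's ratio test as the acceptance rule, which is a common stand-in here rather than the literal procedure of steps 1 and 2:

```python
import cv2

def match_keypoints(des_target, des_realtime, ratio=0.75):
    """k-nearest-neighbour matching (k = 2) of 128-D SIFT descriptors,
    keeping only pairs whose best match is clearly better than the
    second best."""
    bf = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in bf.knnMatch(des_target, des_realtime, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return good
```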
(3.5) Determination of the transformation function
The transformation function used in this embodiment is the perspective transformation, whose essence is to project the image onto a new viewing plane. The general transformation formula is:

[x^*, y^*, w^*] = [u, v, 1] \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}

Here (u, v) is a position in the original image and (x, y) is the corresponding position in the transformed image, with x = x^*/w^* and y = y^*/w^*. The transformation matrix can be divided into four parts: \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} represents a linear transformation, [a_{31}\ a_{32}] is the translation offset, and [a_{13}\ a_{23}]^T produces the perspective effect. One thus obtains:

x = \frac{a_{11}u + a_{21}v + a_{31}}{a_{13}u + a_{23}v + a_{33}}, \qquad y = \frac{a_{12}u + a_{22}v + a_{32}}{a_{13}u + a_{23}v + a_{33}}
The RANSAC method is a way of fitting a model to data and can be used to match the target image with the airborne down-view real-time image. Compared with the classical least-squares fit for determining the projection function, it has the intrinsic property of detecting and rejecting erroneous pixels. RANSAC proceeds as follows:

Given the set P of corresponding keypoint pairs between the target image and the real-time image, with |P| > n, where n is the minimum number of keypoint pairs needed to instantiate the projection parameters: randomly select a subset S1 of n data points from P and instantiate the model from it; use the instantiated model M1 to determine the subset S1* of P whose data have the smallest error with respect to M1. If the size of S1* is greater than a threshold t, compute a new model M1* by least-squares fitting; if the size of S1* is smaller than the threshold t, randomly select a new subset S1 and instantiate the data again. If the number of iterations exceeds a threshold ln, it is concluded that no consistent point set can be found and the algorithm terminates. The RANSAC algorithm yields the projection function of the match, and the target is marked in the real-time image at the same time.
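A minimal sketch of this step using OpenCV's built-in RANSAC homography estimation (the 5.0-pixel reprojection threshold is an illustrative choice, not a value fixed by the patent):

```python
import cv2
import numpy as np

def ransac_homography(kp_target, kp_realtime, matches, reproj_thresh=5.0):
    """Fit the perspective transform with RANSAC, which detects and
    rejects the outlier pairs a plain least-squares fit would absorb.
    Returns the 3x3 matrix and the inlier mask."""
    if len(matches) < 4:  # n = 4 point pairs instantiate a homography
        return None, None
    src = np.float32([kp_target[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_realtime[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    return cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
```

Applying cv2.perspectiveTransform to the four corner points of the target image with the returned matrix then marks the target's position in the real-time image.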
Other steps and parameters are the same as in Embodiment 1.
Embodiment 3: This embodiment differs from Embodiment 1 in that the specific process of evaluating the matching results with the orientation-histogram-based evaluation function in step (4) and selecting the optimal matching region as the matching result includes:
(4.1) Use the Bhattacharyya distance BD as the matching similarity measure between the histograms of the two regions being matched;

(4.2) If BD is greater than a threshold T, the match is judged successful; otherwise (BD less than or equal to T) the match fails;

(4.3) Among the successfully matched regions, select the region with the largest BD value as the optimal matching region.
Other steps and parameters are the same as in Embodiment 1 or 2.
Embodiment 4: This embodiment differs from Embodiment 3 in that the Bhattacharyya distance BD in step (4.1) is specifically:

BD = \sum_{j=1}^{N} \sqrt{th(j)\, rh(j)}

where th(j) denotes the orientation histogram of the target image, rh(j) denotes the orientation histogram of the real-time image, j = 1, ..., N, and N is the number of grey-level classes of the image.
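A direct transcription of this definition (note that OpenCV's HISTCMP_BHATTACHARYYA computes a distance where smaller is better, so the coefficient form used here, where larger is better, is computed by hand):

```python
import numpy as np

def bhattacharyya_bd(th, rh):
    """BD of two orientation histograms: 1.0 for identical normalised
    distributions; a larger value indicates a more credible match."""
    th = np.asarray(th, dtype=np.float64)
    rh = np.asarray(rh, dtype=np.float64)
    th = th / max(th.sum(), 1e-12)
    rh = rh / max(rh.sum(), 1e-12)
    return float(np.sum(np.sqrt(th * rh)))
```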
Other steps and parameters are the same as in Embodiment 1, 2, or 3.
Embodiment 5: This embodiment differs from Embodiment 3 in that the standard deviation STD in step (1) is specifically defined as:

STD = \sqrt{\frac{1}{N} \sum_{j=1}^{N} \bigl(h(j) - \mu\bigr)^2}, \qquad \mu = \frac{1}{N} \sum_{j=1}^{N} h(j)

where μ denotes the mean of the orientation histogram, h(j) denotes the orientation histogram of the image, j = 1, ..., N, and N is the number of grey-level classes of the image.
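A direct transcription of this definition (whether h(j) is first normalised to sum to one is not stated here; the sketch assumes it is):

```python
import numpy as np

def orientation_histogram_std(h):
    """STD of an orientation histogram; compared against the threshold
    S of Embodiment 7 (0.13-0.15) to decide whether the target image
    is texture-rich."""
    h = np.asarray(h, dtype=np.float64)
    h = h / max(h.sum(), 1e-12)      # assumed normalisation
    mu = h.mean()
    return float(np.sqrt(np.mean((h - mu) ** 2)))
```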
Other steps and parameters are the same as in Embodiment 1, 2, 3, or 4.
Embodiment 6: This embodiment differs from Embodiment 2 in that the corner points are obtained with the Harris corner detection method.
The Harris corner detection method determines the corners in an image from the magnitude of the corner response function R. Let I denote the grey value of the image and (x, y) a pixel coordinate position. For a small region at position (x, y), the change E_{x,y} of this region is defined as:
E_{x,y} = \sum_{u,v} w_{u,v}\, \bigl| I_{x+u,\, y+v} - I_{u,v} \bigr|^2
Here w_{u,v} denotes an image window, (u, v) is a coordinate position within the window, and (x, y) can shift in the four directions {(-1, 1), (1, 1), (-1, -1), (1, -1)}. Analytical expansion yields another expression for E_{x,y}:
E_{x,y} = (x, y)\, M\, (x, y)^T
M is a 2×2 symmetric matrix:

M = \begin{bmatrix} A & C \\ C & B \end{bmatrix}

in which A = X^2 \otimes w, B = Y^2 \otimes w, and C = (XY) \otimes w, where X = \partial I / \partial x and Y = \partial I / \partial y are the first-order image gradients and \otimes denotes convolution with the window w. Define:
Tr(M) = A + B

Det(M) = AB - C^2
Then the corner response function R is obtained:

R = Det(M) - q\, Tr^2(M)
Here q is an empirical value; corner detection consists of finding the points whose response function exceeds a certain threshold and taking them as corners.
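A compact sketch using OpenCV's cornerHarris (blockSize, ksize, and the threshold fraction are illustrative assumptions; q = 0.04 is a typical empirical value):

```python
import cv2
import numpy as np

def harris_corners(gray, q=0.04, frac=0.01):
    """Compute R = Det(M) - q * Tr^2(M) per pixel and keep the points
    whose response exceeds a fraction of the maximum response."""
    R = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=q)
    ys, xs = np.where(R > frac * R.max())
    return list(zip(xs.tolist(), ys.tolist()))
```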
Other steps and parameters are the same as in Embodiment 1, 2, 3, 4, or 5.
Embodiment 7: This embodiment differs from Embodiments 1 to 6 in that the threshold S takes a value of 0.13 to 0.15. For images with little texture, the features detected by traditional feature detection algorithms are not distinctive, and image matching on a large real-time image leads to failure. STD statistics computed over a large number of images show that an image with STD below 0.13 can be judged non-texture-rich in the usual sense, and an image with STD above 0.15 can be judged texture-rich in the usual sense; taking the threshold S in the range 0.13 to 0.15 to distinguish texture-rich from non-texture-rich images is therefore reasonable and gives good results.
Other steps and parameters are the same as in Embodiment 1, 2, 3, 4, 5, or 6.
Example

The following example is used to verify the beneficial effects of the present invention:
The image of site a (Fig. 9) is selected as a target image, with the real-time images of Figs. 1, 2, and 6; the image of site c (Fig. 13) is selected as a target image, with the real-time image of Fig. 20.
The specific steps of the airborne down-view heterogeneous image matching method based on region division are as follows:
1. Set the threshold S to 0.14. Using the STD formula of Embodiment 5, the standard deviation of the orientation histogram of site a is computed as STD = 0.152 (Fig. 10), so the image is judged texture-rich; the standard deviation for site c is STD = 0.129 (Fig. 14), so the image is judged non-texture-rich.
2. As shown in Fig. 8, the target image and the real-time image are segmented into several regions with the MeanShift image segmentation algorithm, and the segmented target image regions and real-time image regions are layered to generate the corresponding mask images. Keypoints are detected in the different regions and consistent keypoints are found between the region pairs; the transformation function then produces the marked image; finally, the orientation-histogram-based evaluation function selects the optimal matching region.
To test the effect of the method of the present invention, the above target images and real-time images were also matched with other existing algorithms; the comparison between the method of the present invention and the other algorithms is given in the table below:
Table 1 Comparison of the matching effects of different methods
As the table shows, the method of the present invention achieves higher image matching accuracy. The combination of MeanShift, SIFT, and corner points used in the present invention greatly improves matching accuracy under rotation changes, and when other methods fail on non-texture-rich images, the block-division-based image matching algorithm used in the present invention still matches the images effectively. Compared with the traditional SIFT algorithm, the method of the present invention (the airborne down-view heterogeneous image matching method based on region division) improves image matching accuracy by about 15%.
The present invention may have various other embodiments. Without departing from the spirit and essence of the present invention, those skilled in the art can make corresponding changes and modifications according to the present invention, but all such changes and modifications shall fall within the protection scope of the appended claims of the present invention.
Claims (7)
Priority Applications (1)
- CN201811348826.1A (priority/filing date 2018-11-13): Airborne Down-View Heterogeneous Image Matching Method Based on Region Division
Publications (2)
- CN109508674A (published 2019-03-22)
- CN109508674B (granted 2021-08-13)
Family
ID=65748318
Family Applications (1)
- CN201811348826.1A (priority/filing date 2018-11-13): Airborne Down-View Heterogeneous Image Matching Method Based on Region Division

Country Status (1)
- CN: CN109508674B (en)
Patent Citations (6)
- US20170308673A1* (priority 2011-08-12, published 2017-10-26), Help Lightning, Inc.: System and method for image registration of multiple video streams
- CN103914847A* (priority 2014-04-10, published 2014-07-09): SAR image registration method based on phase congruency and SIFT
- CN104933434A* (priority 2015-06-16, published 2015-09-23): Image matching method combining LBP (local binary pattern) feature extraction and SURF feature extraction
- CN105160684A* (priority 2015-09-30, published 2015-12-16): Online automatic matching method for geometric correction of remote sensing images
- CN106355577A* (priority 2016-09-08, published 2017-01-25): Method and system for quickly matching images based on feature states and global consistency
- CN107480727A* (priority 2017-08-28, published 2017-12-15): Fast UAV image matching method combining SIFT and ORB
Non-Patent Citations (3)
- Jiang Yang et al., "A Remote Sensing Imagery Automatic Feature Registration Method Bases on Mean-Shift", 2012 IEEE International Geoscience and Remote Sensing Symposium. *
- Wenping Ma et al., "Remote Sensing Image Registration Based on Multifeature and Region Division", IEEE Geoscience and Remote Sensing Letters. *
- Xue Wu et al., "Tie-point extraction from UAV images over desert areas" (in Chinese), Journal of Geomatics Science and Technology. *
Cited By (5)
- CN110523657A (priority 2019-09-03, published 2019-12-03): Instrument appearance detection device based on a 3D scanning method and its working method
- CN114565861A / CN114565861B (priority 2022-03-02, published 2022-05-31, granted 2024-04-30): Airborne downward-looking target image localization method based on probabilistic-statistical diffeomorphism set matching
- CN116778513A / CN116778513B (priority 2023-08-24, published 2023-09-19, granted 2023-10-27): Intelligent archiving control method for bills in the power industry
Also Published As
- CN109508674B (granted 2021-08-13)
Similar Documents
- CN113012203B: High-precision multi-target tracking method under complex background
- CN108388896B: License plate identification method based on dynamic time sequence convolutional neural network
- Li et al.: SAR image change detection using PCANet guided by saliency detection
- CN111145228B: Heterogeneous image registration method based on fusion of local contour points and shape features
- CN104200495B: Multi-object tracking method in video monitoring
- CN108446634B: Aircraft continuous tracking method based on combination of video analysis and positioning information
- CN109785366B: Correlation-filtering target tracking method for occlusion
- CN109446894B: Multispectral image change detection method based on probabilistic segmentation and Gaussian mixture clustering
- CN102800099B: Multi-feature multi-level visible light and hyperspectral image high-precision registration method
- CN106611420A: SAR image segmentation method based on deconvolution network and sketch direction constraint
- CN105321189A: Complex environment target tracking method based on continuous adaptive mean shift multi-feature fusion
- CN107977660A: Region-of-interest detection method based on background prior and foreground nodes
- CN109508674B: Airborne Down-View Heterogeneous Image Matching Method Based on Region Division (the present publication)
- CN103295031A: Image object counting method based on regularized risk minimization
- CN104392459A: Infrared image segmentation method based on improved FCM (fuzzy C-means) and mean shift
- CN102930294A: Chaotic-characteristic-parameter-based motion mode video segmentation and traffic condition identification method
- CN105894037A: Fully supervised classification method for remote sensing images based on SIFT training samples
- Elmikaty et al.: Car detection in aerial images of dense urban areas
- CN103854290A: Extended target tracking method combining skeleton feature points and distribution field descriptors
- CN105447488B: SAR image target detection method based on sketch line-segment topology
- CN110929598B: UAV-borne SAR image matching method based on contour features
- CN112734816A: Heterogeneous image registration method based on CSS-Delaunay
- CN112613565A: Anti-occlusion tracking method based on multi-feature fusion and adaptive learning rate updating
- CN110738098A: Target identification, positioning, locking and tracking method
- CN104766321A: Infrared pedestrian image accurate segmentation method using shortest annular path
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant
- CF01: Termination of patent right due to non-payment of annual fee (granted publication date: 2021-08-13)