CN105005993B - A fast and accurate three-dimensional terrain matching method based on heterogeneous projection - Google Patents

A fast and accurate three-dimensional terrain matching method based on heterogeneous projection

Info

Publication number
CN105005993B
CN105005993B, CN201510397177.4A, CN201510397177A
Authority
CN
China
Prior art keywords
matching
terrain
point
perspective
projection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510397177.4A
Other languages
Chinese (zh)
Other versions
CN105005993A (en)
Inventor
刘贵喜
方兰兰
吕孟娇
张娜
姚李阳
唐海军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201510397177.4A
Publication of CN105005993A
Application granted
Publication of CN105005993B
Status: Expired - Fee Related
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/04 - Indexing scheme for image data processing or generation, in general involving 3D image data

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a fast and accurate three-dimensional terrain matching method based on heterogeneous projection. The method comprises acquiring and converting three-dimensional DEM (Digital Elevation Model) terrain data, performing orthographic projection on the three-dimensional terrain, and matching the projection images obtained from the orthographic projection of the terrain. The projection images are matched by combining line features and point features: after detecting lines in the projection images and matching the same-name lines, the virtual corner points where pairs of same-name lines intersect are found and their coordinates are computed. The found virtual corner points are then matched with an improved SURF (Speeded-Up Robust Features) algorithm that combines the SURF, HARRIS, and NCC (Normalized Cross-Correlation) algorithms. Two lines are considered correctly matched if and only if their corresponding corner points match. Finally, the transformation relation between the projection images is computed, and the transformation parameters are applied to the matching between the three-dimensional terrains, completing the whole terrain matching process.

Description

A Fast and Accurate Matching Method for 3D Terrain Based on Heterogeneous Projection

Technical Field

The invention belongs to the field of three-dimensional terrain matching, and specifically provides a fast and accurate three-dimensional terrain matching method based on heterogeneous projection.

Background Art

Terrain matching is one of the key technologies of terrain-aided navigation. It is widely used in aviation and also has broad application prospects in seabed terrain matching and positioning for underwater vehicles, in robot navigation and positioning, and in land vehicle navigation.

There are many existing terrain matching algorithms; researchers in China and abroad continue to study them, and the algorithms are constantly being updated and improved. Among the existing methods, one performs terrain matching using the points of maximum and minimum Gaussian curvature in the Gaussian-curvature image as feature points. A terrain-contour-based matching method describes and matches contours with normalized wavelet descriptors, but it is not applicable when the terrain contours are not distinct. A visual-dictionary method has shown that direct 2D-to-3D matching can improve matching performance, but with large data volumes it strains memory and takes a long time, and mismatched descriptors are not eliminated during matching. A later three-dimensional terrain matching algorithm based on surface features suffers from very long matching times when the terrain data are large or the terrain map is complex. For this reason, the present invention proposes a new fast and accurate three-dimensional terrain matching method based on terrain orthographic projection that fully exploits the surface features of the terrain. In many cases the projection images are not homologous but heterogeneous; heterogeneous images lack consistent grayscale information, so traditional image matching methods no longer apply. The invention matches the projection images by combining line features and point features. First, a line matching algorithm matches the same-name lines in the images, and the pairwise intersections of the same-name lines are taken as virtual corner points; the point features are then used for a second matching that eliminates mismatched lines. The point features are matched with the more accurate improved SURF (Speeded-Up Robust Features) algorithm, which combines the SURF, HARRIS, and NCC (Normalized Cross-Correlation) algorithms.

Summary of the Invention

The object of the present invention is to provide a fast and accurate three-dimensional terrain matching method based on heterogeneous projection, so as to improve the efficiency and accuracy of three-dimensional terrain matching.

The object of the present invention is achieved as follows: a fast and accurate three-dimensional terrain matching method based on heterogeneous projection, characterized in that it comprises at least the following steps:

Step 1: convert the original digital elevation model terrain data format USGS-DEM into the digital elevation model terrain data format CNSTDF-DEM;

Step 2: perform orthographic projection on the reference 3D terrain and the 3D terrain to be matched, both in the terrain format of step 1;

Step 3: match the projection images according to the orthographic projection of step 2 to obtain the projection transformation parameters; the matching process combines line features and point features;

Step 4: match the reference 3D terrain against the 3D terrain to be matched according to the transformation parameters of step 3 and obtain the matching result.

Step 1 comprises the following sub-steps (a sketch of the conversion loop follows the list):

Step 11: open a terrain file;

Step 12: check whether the opened file is in the USGS-DEM terrain data format;

Step 13: if not, return to step 11; if so, start reading the data header;

Step 14: extract the relevant terrain file header information;

Step 15: store the file header information;

Step 16: allocate data storage space;

Step 17: read the data body;

Step 18: filter out the first 144 bytes of each row of the data body;

Step 19: store the data-related information;

Step 110: combine the stored file header and data body and save them as a CNSTDF-DEM terrain format file;

Step 111: the terrain conversion is complete.
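The sub-steps above amount to a read-filter-write loop. The following is a minimal sketch of that loop (steps 11 to 111), assuming a simplified row-oriented layout and an illustrative 1024-byte header; real USGS-DEM parsing (logical record types A, B, and C) is more involved, and the function and parameter names are assumptions, not part of the patent.

```python
# Illustrative sketch of the step 1 conversion loop; layout details are assumed.
def convert_usgs_to_cnstdf(src_path, dst_path, header_size=1024, row_prefix=144):
    with open(src_path, "rb") as f:
        header = f.read(header_size)            # steps 13-15: read and store the header
        body_rows = []
        for row in f.read().splitlines():       # step 17: read the data body row by row
            body_rows.append(row[row_prefix:])  # step 18: drop the first 144 bytes of each row
    with open(dst_path, "wb") as out:           # step 110: write header + filtered body
        out.write(header)
        out.write(b"\n".join(body_rows))
```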

Step 2 comprises the following sub-steps:

Step 21: open the converted 3D CNSTDF-DEM terrain data;

Step 22: obtain the terrain surface texture features;

Step 23: obtain the 3D terrain texture;

Step 24: obtain the terrain orthographic projection image from the texture (a simplified rendering sketch follows).
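As a rough illustration of step 24, a regular DEM grid can be rendered top-down by mapping elevation to gray levels. This is a minimal sketch under that simplifying assumption; the patent's texture pipeline (steps 22 and 23) is richer than this.

```python
import numpy as np

def ortho_projection(dem):
    """dem: 2D array of elevations; returns an 8-bit top-down (orthographic) image."""
    z = np.asarray(dem, dtype=np.float64)
    rng = z.max() - z.min()
    z = (z - z.min()) / max(rng, 1e-12)       # normalize elevations to [0, 1]
    return (255 * z).astype(np.uint8)         # one pixel per grid cell, seen from above
```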

Step 3 comprises the following sub-steps:

Step 31: detect and match straight lines in the projection image to be matched and in the reference projection image; after the same-name lines have been matched, take the pairwise intersections of the lines in the projection image to be matched as the virtual corner points of that image;

Step 32: take the pairwise intersections of the lines in the reference projection image as the virtual corner points of the reference projection image (the intersection computation is sketched below);
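The virtual corner of steps 31 and 32 is simply the intersection of two matched lines. A minimal sketch, with each line given by two endpoints (the helper is illustrative, not taken from the patent):

```python
import numpy as np

def line_intersection(p1, p2, q1, q2):
    """Intersect line p1-p2 with line q1-q2; returns None if (nearly) parallel."""
    d1, d2 = np.subtract(p2, p1), np.subtract(q2, q1)
    denom = d1[0] * d2[1] - d1[1] * d2[0]     # 2D cross product of the directions
    if abs(denom) < 1e-9:                     # parallel lines yield no virtual corner
        return None
    t = ((q1[0] - p1[0]) * d2[1] - (q1[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```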

Step 33: perform coarse matching of the virtual corner points with the SURF algorithm;

Step 34: perform fine matching of the virtual corner points with the SURF algorithm;

Step 35: the first matching stage yields the set φAB of accurately matched feature point pairs between the reference image and the image to be registered; from φAB the perspective transformation matrix H between the two images can be computed, as sketched below;
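A minimal sketch of step 35, assuming OpenCV is available: estimate the perspective transform H from the matched pair set φAB. The RANSAC flag is an assumed robustness choice; the patent only states that H is computed from the pairs.

```python
import numpy as np
import cv2

def perspective_from_pairs(phi_ab):
    """phi_ab: iterable of ((x_ref, y_ref), (x_reg, y_reg)) matched point pairs."""
    ref = np.float32([p for p, _ in phi_ab])
    reg = np.float32([q for _, q in phi_ab])
    H, _ = cv2.findHomography(reg, ref, cv2.RANSAC, 3.0)  # maps the image to register onto the reference
    return H
```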

Step 36: transform the projection image to be matched with the computed transformation matrix H;

Step 37: interpolate the projection image to be matched;

Step 38: the transformation and interpolation yield an intermediate image;

Step 39: obtain the reference projection image and the projection image to be matched;

Step 310: find the overlapping part of the reference projection image and the projection image to be matched as their respective regions of interest, and divide the reference image into sub-regions according to the size of the overlap. When the overlap is large, the sub-region size is 64×64; when the overlap is small, the sub-regions shrink accordingly;

Step 311: in each sub-region of the reference projection image, extract HARRIS feature points within a 32×32 neighborhood of the region center, i.e. 0.5 times the sub-region size; take the HARRIS point with the largest R value in that region, i.e. the point most distinguishable from its surroundings, as the feature point of the reference projection image; if there is no HARRIS feature point in the 32×32 neighborhood, treat the sub-region center as a feature point (as sketched below);
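A minimal sketch of step 311, assuming OpenCV and a 64×64 sub-region: pick the strongest HARRIS response inside the central 32×32 window, falling back to the sub-region center when no usable corner is found (the fallback cutoff is an assumption).

```python
import numpy as np
import cv2

def best_harris_point(subregion, win=32):
    """subregion: grayscale patch (e.g. 64x64); returns (x, y) in patch coordinates."""
    R = cv2.cornerHarris(np.float32(subregion), blockSize=2, ksize=3, k=0.04)
    cy, cx = subregion.shape[0] // 2, subregion.shape[1] // 2
    half = win // 2
    window = R[cy - half:cy + half, cx - half:cx + half]
    if window.max() <= 0.01 * R.max():        # assumed cutoff: no usable corner here
        return (cx, cy)                       # fall back to the sub-region center
    dy, dx = np.unravel_index(np.argmax(window), window.shape)
    return (cx - half + dx, cy - half + dy)
```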

Step 312: once all feature points of the reference projection image have been extracted, perform NCC matching;

Step 313: in the intermediate image, search within a 96×96 neighborhood, i.e. 1.5 times the sub-region size, centered on each reference feature point's coordinates; record the correlation coefficients and point coordinates to obtain coarse matching points;

Step 314: after the 96×96 search is complete, compare the correlation coefficients of all recorded coarse matching points, select the largest one, and test it against the threshold TNCC;

Step 315: if it exceeds the given threshold TNCC, take the corresponding coordinate point as the fine matching point between the intermediate-image feature point and the reference feature point (as sketched below);
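A minimal sketch of steps 312 to 315, assuming OpenCV's normalized cross-correlation: search a 96×96 window of the intermediate image for the best placement of a template cut around a reference feature point, accepting it only above an assumed threshold TNCC (0.8 here is illustrative).

```python
import numpy as np
import cv2

def ncc_fine_match(intermediate, template, center, search=96, t_ncc=0.8):
    """center: (x, y) of the reference feature point mapped into the intermediate image."""
    x, y = center
    half = search // 2
    x0, y0 = max(x - half, 0), max(y - half, 0)
    region = intermediate[y0:y + half, x0:x + half]
    scores = cv2.matchTemplate(region, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    if max_val <= t_ncc:                      # step 315: threshold test on the best score
        return None
    tx, ty = max_loc                          # top-left corner of the best placement
    return (x0 + tx + template.shape[1] // 2, y0 + ty + template.shape[0] // 2)
```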

Step 316: fit the fine matching point pairs with the least squares method;

Step 317: compute the transformation matrix between the intermediate image and the reference projection image to obtain the final transformation parameters of the projection matching, completing the projection image matching (a fitting sketch follows).
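A minimal sketch of steps 316 and 317: fit a transform between the intermediate image and the reference projection image from the fine matching pairs by least squares. The patent does not fix the model; a 6-parameter affine fit is an assumed, common choice.

```python
import numpy as np

def fit_affine_lstsq(src_pts, dst_pts):
    """Least squares fit of dst ~ A @ [x, y, 1]; returns the 2x3 affine matrix A."""
    src = np.asarray(src_pts, dtype=np.float64)
    dst = np.asarray(dst_pts, dtype=np.float64)
    X = np.hstack([src, np.ones((len(src), 1))])  # N x 3 design matrix [x, y, 1]
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)   # minimizes ||X @ A - dst||
    return A.T
```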

Step 4 comprises the following sub-steps:

Step 41: take the transformation parameters obtained when the projection matching is complete;

Step 42: return to the 3D terrain and transform the 3D terrain accordingly (a sketch follows);

Step 43: the 3D terrain matching is complete.
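A minimal sketch of step 42 under an assumed convention, since the patent does not spell out the mechanics: the planar transformation parameters recovered from the projection matching are applied to the horizontal coordinates of the terrain points, leaving elevations unchanged.

```python
import numpy as np

def transform_terrain(points_xyz, H):
    """points_xyz: (N, 3) terrain points; H: 3x3 planimetric transform from step 3."""
    pts = np.asarray(points_xyz, dtype=np.float64)
    xy1 = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    uvw = xy1 @ H.T                           # transform the horizontal coordinates
    xy = uvw[:, :2] / uvw[:, 2:3]             # perspective normalization
    return np.column_stack([xy, pts[:, 2]])   # keep elevations unchanged
```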

The method comprises acquiring and converting three-dimensional DEM (Digital Elevation Model) terrain data, performing orthographic projection on the three-dimensional terrain, and matching the projection images according to the orthographic projection of the terrain. The projection images are matched by combining line features and point features: after detecting lines in the projection images and matching the same-name lines, the virtual corner points where pairs of same-name lines intersect are found and their coordinates are computed. The found virtual corner points are then matched with the improved SURF algorithm; two lines are considered correctly matched if and only if their corresponding corner points match. The transformation relation between the projection images is computed, and the transformation parameters are applied to the matching between the three-dimensional terrains, completing the whole terrain matching process. The invention performs matching on projections of the three-dimensional terrain and is a new approach to three-dimensional terrain matching. It can be applied to visual navigation of unmanned aerial vehicles in degraded environments and under occlusion, matching heterogeneous projections.

The beneficial effects of the invention are: the surface features of the terrain are fully exploited to match the orthographic projections of the three-dimensional terrain, providing a new three-dimensional terrain matching algorithm. The algorithm also applies to heterogeneous terrain projections, and the improved SURF point feature algorithm increases the accuracy of the projection matching.

Brief Description of the Drawings

Figure 1: flow chart of the present invention;

Figure 2: flow chart of the terrain data format conversion;

Figure 3: acquisition of the terrain projection image;

Figure 4: flow chart of the improved SURF algorithm;

Figure 5: the terrain matching process;

Figure 6: results of the SURF algorithm and of the improved SURF algorithm on heterogeneous projection images.

Detailed Description of the Embodiments

As shown in Figure 1, the three-dimensional terrain matching flow comprises the following steps:

Step 1: convert the original digital elevation model terrain data format USGS-DEM into the digital elevation model terrain data format CNSTDF-DEM;

Step 2: perform orthographic projection on the reference 3D terrain and the 3D terrain to be matched, both in the terrain format of step 1;

Step 3: match the projection images according to the orthographic projection of step 2 to obtain the projection transformation parameters; the matching process combines line features and point features;

Step 4: match the reference 3D terrain against the 3D terrain to be matched according to the transformation parameters of step 3 and obtain the matching result.

As shown in Figure 2, step 1 comprises the following sub-steps:

Step 11: open a terrain file;

Step 12: check whether the opened file is in the USGS-DEM terrain data format;

Step 13: if not, return to step 11; if so, start reading the data header;

Step 14: extract the relevant terrain file header information;

Step 15: store the file header information;

Step 16: allocate data storage space;

Step 17: read the data body;

Step 18: filter out the first 144 bytes of each row of the data body;

Step 19: store the data-related information;

Step 110: combine the stored file header and data body and save them as a CNSTDF-DEM terrain format file;

Step 111: the terrain conversion is complete.

As shown in Figure 3, step 2 comprises the following sub-steps:

Step 21: open the converted 3D CNSTDF-DEM terrain data;

Step 22: obtain the terrain surface texture features;

Step 23: obtain the 3D terrain texture;

Step 24: obtain the terrain orthographic projection image from the texture.

As shown in Figure 4, step 3 comprises the following sub-steps:

Step 31: detect and match straight lines in the projection image to be matched and in the reference projection image; after the same-name lines have been matched, take the pairwise intersections of the lines in the projection image to be matched as the virtual corner points of that image;

Step 32: take the pairwise intersections of the lines in the reference projection image as the virtual corner points of the reference projection image.

Step 33: perform coarse matching of the virtual corner points with the SURF algorithm;

Step 34: perform fine matching of the virtual corner points with the SURF algorithm;

Step 35: the first matching stage yields the set φAB of accurately matched feature point pairs between the reference image and the image to be registered; from φAB the perspective transformation matrix H between the two images can be computed;

Step 36: transform the projection image to be matched with the computed transformation matrix H;

Step 37: interpolate the projection image to be matched;

Step 38: the transformation and interpolation yield an intermediate image;

Step 39: obtain the reference projection image and the projection image to be matched;

Step 310: find the overlapping part of the reference projection image and the projection image to be matched as their respective regions of interest, and divide the reference image into sub-regions according to the size of the overlap. When the overlap is large, the sub-region size is 64×64; when the overlap is small, the sub-regions shrink accordingly;

Step 311: in each sub-region of the reference projection image, extract HARRIS feature points within a 32×32 neighborhood of the region center, i.e. 0.5 times the sub-region size; take the HARRIS point with the largest R value in that region, i.e. the point most distinguishable from its surroundings, as the feature point of the reference projection image; if there is no HARRIS feature point in the 32×32 neighborhood, treat the sub-region center as a feature point;

Step 312: once all feature points of the reference projection image have been extracted, perform NCC matching;

Step 313: in the intermediate image, search within a 96×96 area (1.5 times the sub-region size) centered on each reference feature point's coordinates; record the correlation coefficients and point coordinates to obtain coarse matching points;

Step 314: after the 96×96 search is complete, compare the correlation coefficients of all recorded coarse matching points, select the largest one, and test it against the threshold TNCC;

Step 315: if it exceeds the given threshold TNCC, take the corresponding coordinate point as the fine matching point between the intermediate-image feature point and the reference feature point.

Step 316: fit the fine matching point pairs with the least squares method;

Step 317: compute the transformation matrix between the intermediate image and the reference projection image to obtain the final transformation parameters of the projection matching, completing the projection image matching.

As shown in Figure 5, step 4 comprises the following sub-steps:

Step 41: take the transformation parameters obtained when the projection matching is complete;

Step 42: return to the 3D terrain and transform the 3D terrain accordingly;

Step 43: the 3D terrain matching is complete.

As shown in Figure 6, Figure 6(a) is the visible-light projection image of the terrain, Figure 6(b) is the infrared projection image of the terrain, Figure 6(c) is the matching result of the SURF algorithm, with a matching accuracy of 0.2108 pixels, and Figure 6(d) is the matching result of the improved SURF algorithm proposed here, with a matching accuracy of 0.0338 pixels. The results show that the improved SURF algorithm increases the accuracy of the matching process.

The parts of the steps not described in detail are common means and algorithms well known in the art and are not recounted here one by one.

Claims (4)

1. A fast and accurate three-dimensional terrain matching method based on heterogeneous projection, characterized by comprising at least the following steps:
Step 1: converting the original digital elevation model terrain data format USGS-DEM into the digital elevation model terrain data format CNSTDF-DEM;
Step 2: performing orthographic projection on the reference 3D terrain in the digital elevation model terrain data format CNSTDF-DEM of step 1 and on the 3D terrain to be matched;
Step 3: matching the projection images according to the orthographic projection of step 2 to obtain the projection transformation parameters, the matching process combining line features and point features;
Step 4: matching the reference 3D terrain against the 3D terrain to be matched according to the transformation parameters of step 3 and obtaining the matching result;
wherein step 3 comprises the following sub-steps:
Step 31: detecting and matching straight lines in the projection image to be matched and the reference projection image, and, after the same-name lines have been matched, taking the pairwise intersections of the lines in the projection image to be matched as the virtual corner points of that image;
Step 32: taking the pairwise intersections of the lines in the reference projection image as the virtual corner points of the reference projection image;
Step 33: performing coarse matching of the virtual corner points with the SURF algorithm;
Step 34: performing fine matching of the virtual corner points with the SURF algorithm;
Step 35: obtaining, through the coarse matching of step 33 combined with the fine matching of step 34, the set φAB of accurately matched feature point pairs between the reference image and the image to be registered, from which the perspective transformation matrix H between the reference image and the image to be registered is computed;
Step 36: transforming the projection image to be matched according to the computed transformation matrix H;
Step 37: interpolating the projection image to be matched;
Step 38: obtaining an intermediate image after the transformation and interpolation;
Step 39: obtaining the reference projection image and the projection image to be matched;
Step 310: finding the overlapping part of the reference projection image and the projection image to be matched as their respective regions of interest, and dividing the reference image into sub-regions according to the size of the overlap; when the overlap is large the sub-region size is 64×64, and when the overlap is small the sub-regions shrink accordingly;
Step 311: in each sub-region of the reference projection image, extracting HARRIS feature points within a 32×32 neighborhood of the region center, i.e. 0.5 times the sub-region size, and taking the HARRIS point with the largest R value in that region, i.e. the point most distinguishable from its surroundings, as a feature point of the reference projection image; if there is no HARRIS feature point in the 32×32 neighborhood, treating the sub-region center as a feature point;
Step 312: performing NCC matching once all feature points of the reference projection image have been extracted;
Step 313: searching, in the intermediate image, a 96×96 neighborhood, i.e. 1.5 times the sub-region size, centered on each reference feature point's coordinates, and recording the correlation coefficients and point coordinates to obtain coarse matching points;
Step 314: after the 96×96 search is complete, comparing the correlation coefficients of all recorded coarse matching points, selecting the largest one, and testing it against the threshold TNCC;
Step 315: if it exceeds the given threshold TNCC, taking the corresponding coordinate point as the fine matching point between the intermediate-image feature point and the reference feature point;
Step 316: fitting the fine matching point pairs with the least squares method;
Step 317: computing the transformation matrix between the intermediate image and the reference projection image to obtain the final transformation parameters of the projection matching, completing the projection image matching.
2. The fast and accurate three-dimensional terrain matching method based on heterogeneous projection according to claim 1, characterized in that step 1 comprises the following sub-steps:
Step 11: opening a terrain file;
Step 12: checking whether the opened file is in the USGS-DEM terrain data format;
Step 13: if not, returning to step 11; if so, starting to read the data header;
Step 14: extracting the relevant terrain file header information;
Step 15: storing the file header information;
Step 16: allocating data storage space;
Step 17: reading the data body;
Step 18: filtering out the first 144 bytes of each row of the data body;
Step 19: storing the data-related information;
Step 110: combining the stored file header and data body and saving them as a CNSTDF-DEM terrain format file;
Step 111: completing the terrain conversion.
3. The fast and accurate three-dimensional terrain matching method based on heterogeneous projection according to claim 1, characterized in that step 2 comprises the following sub-steps:
Step 21: opening the converted three-dimensional CNSTDF-DEM terrain data;
Step 22: obtaining the terrain surface texture features;
Step 23: obtaining the three-dimensional terrain texture;
Step 24: obtaining the terrain orthographic projection image from the texture.
4. The fast and accurate three-dimensional terrain matching method based on heterogeneous projection according to claim 1, characterized in that step 4 comprises the following sub-steps:
Step 41: obtaining the transformation parameters produced when the projection matching is complete;
Step 42: returning to the three-dimensional terrain and transforming the three-dimensional terrain;
Step 43: completing the three-dimensional terrain matching.
CN201510397177.4A 2015-07-08 2015-07-08 A fast and accurate three-dimensional terrain matching method based on heterogeneous projection Expired - Fee Related CN105005993B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510397177.4A CN105005993B (en) 2015-07-08 2015-07-08 A fast and accurate three-dimensional terrain matching method based on heterogeneous projection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510397177.4A CN105005993B (en) 2015-07-08 2015-07-08 A fast and accurate three-dimensional terrain matching method based on heterogeneous projection

Publications (2)

Publication Number Publication Date
CN105005993A CN105005993A (en) 2015-10-28
CN105005993B (en) 2018-02-16

Family

ID=54378650

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510397177.4A Expired - Fee Related CN105005993B (en) 2015-07-08 2015-07-08 A fast and accurate three-dimensional terrain matching method based on heterogeneous projection

Country Status (1)

Country Link
CN (1) CN105005993B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102016221680B4 (en) * 2016-11-04 2022-06-15 Audi Ag Method for operating a semi-autonomous or autonomous motor vehicle and motor vehicle
CN110728713B (en) * 2018-07-16 2022-09-30 Oppo广东移动通信有限公司 Test method and test system
CN117132796B (en) * 2023-09-09 2024-10-01 廊坊市珍圭谷科技有限公司 Position efficient matching method based on heterogeneous projection

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104618045A (en) * 2015-01-27 2015-05-13 北京交通大学 Collected data-based wireless channel transmission model establishing method and system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4970296B2 (en) * 2008-01-21 2012-07-04 株式会社パスコ Orthophoto image generation method and photographing apparatus

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104618045A (en) * 2015-01-27 2015-05-13 北京交通大学 Collected data-based wireless channel transmission model establishing method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A preliminary study of automated interconversion among three common DEM formats; 李山山 et al.; 《测绘与空间地理信息》; 2008-06-30; Vol. 31, No. 3; pp. 6-8, 11 *
Registration of heterologous images via high-frequency information vector matching; 韩广良 et al.; 《中国光学》; 2011-10-31; Vol. 4, No. 5; pp. 468-473 *

Also Published As

Publication number Publication date
CN105005993A (en) 2015-10-28

Similar Documents

Publication Publication Date Title
CN112270249B (en) Target pose estimation method integrating RGB-D visual characteristics
CN107730503B (en) Image object component-level semantic segmentation method and device based on 3D feature embedding
CN113902860B (en) A multi-scale static map construction method based on multi-line lidar point cloud
CN105261017B (en) The method that image segmentation based on road surface constraint extracts pedestrian's area-of-interest
CN103996198B (en) The detection method of area-of-interest under Complex Natural Environment
CN106446936B (en) Hyperspectral data classification method based on convolutional neural network combined spatial spectrum data to waveform map
CN103699900B (en) Building horizontal vector profile automatic batch extracting method in satellite image
CN113178009A (en) Indoor three-dimensional reconstruction method utilizing point cloud segmentation and grid repair
CN102316352B (en) Stereo video depth image manufacturing method based on area communication image and apparatus thereof
CN103714541A (en) Method for identifying and positioning building through mountain body contour area constraint
CN110414385B (en) A method and system for lane line detection based on homography transformation and feature window
CN103700101A (en) Non-rigid brain image registration method
CN104657986B (en) A kind of quasi- dense matching extended method merged based on subspace with consistency constraint
CN107578376A (en) Image Mosaic Method Based on Feature Point Clustering Quaternary Partition and Local Transformation Matrix
CN108010123A (en) A kind of three-dimensional point cloud acquisition methods for retaining topology information
CN111105452B (en) Binocular vision-based high-low resolution fusion stereo matching method
CN106485737A (en) Cloud data based on line feature and the autoregistration fusion method of optical image
CN106709499A (en) SIFT image feature point extraction method based on Canny operator and Hilbert-Huang transform
CN105005993B (en) A kind of quick fine matching method of dimensional topography based on isomery projection
CN101334896A (en) Sub-pixel edge processing method for digital image measurement
CN105335685B (en) Image-recognizing method and device
CN102799646A (en) Multi-view video-oriented semantic object segmentation method
CN105678791A (en) Lane line detection and tracking method based on parameter non-uniqueness property
CN103914840B (en) A kind of human body contour outline extraction method for non-simple background
CN114596592B (en) A pedestrian re-identification method, system, device, and computer-readable storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180216

Termination date: 20180708

CF01 Termination of patent right due to non-payment of annual fee