WO2020134617A1 - Positioning method for matching buildings with repeated structures based on street view images - Google Patents

Positioning method for matching buildings with repeated structures based on street view images

Info

Publication number
WO2020134617A1
WO2020134617A1 (application PCT/CN2019/115900, CN2019115900W)
Authority
WO
WIPO (PCT)
Prior art keywords
matching
street view
points
matching points
view image
Prior art date
Application number
PCT/CN2019/115900
Other languages
English (en)
French (fr)
Inventor
李小亚
赵伟
朱晶晶
谢超
Original Assignee
南京航空航天大学
南京航空航天大学秦淮创新研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京航空航天大学 (Nanjing University of Aeronautics and Astronautics) and 南京航空航天大学秦淮创新研究院
Publication of WO2020134617A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 — Pattern recognition

Definitions

  • The invention relates to the technical field of navigation, and in particular to a positioning method for matching buildings with repeated structures based on street view images.
  • In modern society, the use of GPS is ubiquitous. Opening GPS in any of the map applications on an in-car system or mobile phone quickly and accurately provides the user's current position, heading and other navigation information.
  • In special environments such as tunnels and underground shopping malls, where the GPS signal is weak or absent, traditional map navigation can hardly function; feature-based visual navigation methods can then take over.
  • Feature-based visual navigation technology mainly comprises two aspects: feature extraction and feature matching.
  • Feature extraction is an important premise and foundation of visual feature methods, and it also affects matching accuracy.
  • Commonly used feature extraction methods include the Harris and SUSAN corner detectors.
  • Their drawback is the lack of scale invariance; to address it, Lowe proposed the SIFT operator, and improved methods based on it include the SURF operator, HOG and others.
  • The advantage of scale-invariant feature descriptors is that they are invariant to scaling and rotation and robust to illumination and viewpoint changes, which suits them to street view navigation.
  • Feature matching techniques mainly include brute-force matching and FLANN matching.
  • The matching results obtained by these methods usually contain many mismatches: because street views contain many repeated, similar buildings, mismatches arise during matching.
  • A more effective matching strategy is the distance-ratio method. It sets a threshold and treats as correct only those matches for which the ratio of a descriptor's nearest distance to its second-nearest distance is below the threshold, thereby rejecting mismatched points; the correct matches are retained, improving matching accuracy. The method is flawed, however: the feature descriptors of buildings with repeated similar structures are usually close together, so the distance-ratio method distinguishes similar buildings poorly. Addressing this problem, this patent proposes a new matching strategy that alleviates the mismatching caused by repeated similar structures.
  • The technical problem to be solved by the present invention is the mismatching caused by buildings with repeated similar structures in street view navigation, together with the inability of the commonly used distance-ratio method to correctly match feature descriptors that lie very close together; it proposes a new matching method for repeated-structure buildings in street view navigation.
  • The positioning method for matching buildings with repeated structures based on street view images includes the following steps:
  • Step 1): take a street view image, detect its feature points with a feature extraction operator, and compute the corresponding descriptors;
  • Step 2): match the image to be matched against the model images in a preset model database to obtain the matching points;
  • Step 3): filter out the erroneous matching points that cause mismatches;
  • Step 4): compute the user's current position from the filtered matching points and feed the position information back to the user.
  • In the positioning method of the present invention, the detailed sub-steps of step 1) are as follows:
  • Step 1.1): take a street view image;
  • Step 1.2): extract features from the image with an invariant feature operator to obtain the feature points;
  • Step 1.3): compute the descriptor of each feature point;
  • Step 1.4): select any feature point, take it as the circle centre, partition its neighbourhood with m equally spaced concentric circles, then divide the circumference into n equal parts, forming m*n sub-regions, where m and n are positive integers;
  • Step 1.5): weight the pixels within each ring with a Gaussian function, pixels on the same ring using the same Gaussian weighting coefficient;
  • Step 1.6): compute the gradient value in each sub-region to form an m*n-dimensional global operator;
  • Step 1.7): combine the global operator and the local operator in vector form into a local-global structure feature descriptor.
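As a rough, non-authoritative illustration of steps 1.4)–1.6), the global operator can be sketched in Python as below; the function name `global_descriptor`, the Gaussian width, and the exact binning are our assumptions rather than the patent's specification:

```python
import numpy as np

def global_descriptor(patch, m=5, n=6):
    """Sketch of steps 1.4)-1.6): partition the neighbourhood of a feature
    point (the patch centre) with m equally spaced concentric circles and n
    angular sectors, then accumulate Gaussian-weighted gradient magnitudes
    in each of the m*n sub-regions."""
    h, w = patch.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0

    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)                      # gradient magnitude per pixel

    ys, xs = np.mgrid[0:h, 0:w]
    r = np.hypot(ys - cy, xs - cx)
    theta = np.mod(np.arctan2(ys - cy, xs - cx), 2 * np.pi)

    # Assign every pixel to one of the m rings and n angular sectors.
    ring = np.minimum((r / (r.max() + 1e-9) * m).astype(int), m - 1)
    sector = np.minimum((theta / (2 * np.pi) * n).astype(int), n - 1)

    # One Gaussian coefficient per ring, so pixels on the same circle share
    # the same weight (step 1.5); the width here is an assumed value.
    ring_w = np.exp(-((np.arange(m) + 0.5) ** 2) / (2 * (m / 2.0) ** 2))

    desc = np.zeros((m, n))
    np.add.at(desc, (ring, sector), ring_w[ring] * mag)
    return desc.ravel()                         # m*n-dimensional global operator
```

With m = 5 and n = 6, the values used in the embodiment, this yields the 30-dimensional global operator; step 1.7) would then concatenate it with a local operator into a single vector.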
  • In the positioning method of the present invention, the detailed sub-steps of step 2) are as follows:
  • Step 2.1): first use FLANN, the fast approximate nearest-neighbour search library, to match the descriptor vectors against the preset model database;
  • Step 2.2): apply the distance-ratio matching strategy to the resulting descriptor vectors to obtain the matching points; the distance-ratio criterion is d1 / d2 < τ, where:
  • f1 is a descriptor of the query image;
  • f1st and f2nd are the descriptors in the model database nearest and second nearest to f1;
  • d1 is the distance between the descriptors f1 and f1st;
  • d2 is the distance between the descriptors f1 and f2nd, distances here being Euclidean;
  • τ is a preset first screening threshold, which affects both the number of matching points and the matching accuracy.
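A minimal sketch of the distance-ratio screening in step 2.2). The patent performs the search with FLANN; to keep the example self-contained we substitute an exact brute-force nearest-neighbour search, so `ratio_test_match` and the toy descriptors are illustrative assumptions only:

```python
import numpy as np

def ratio_test_match(query_desc, model_desc, tau=0.7):
    """Keep a query descriptor f1 only if d1/d2 < tau, where d1 and d2 are
    the Euclidean distances from f1 to its nearest (f1st) and second-nearest
    (f2nd) descriptors in the model database."""
    matches = []
    for i, f1 in enumerate(query_desc):
        d = np.linalg.norm(model_desc - f1, axis=1)
        nearest, second = np.argsort(d)[:2]
        if d[nearest] / d[second] < tau:        # the distance-ratio criterion
            matches.append((i, int(nearest)))
    return matches

# Toy model database: an unambiguous descriptor passes the test, while one
# lying almost equally close to two model descriptors (the repeated-structure
# case) is rejected because its ratio is near 1.
model = np.array([[1.0, 0.05], [0.0, 1.0], [5.0, 5.0]])
unambiguous = ratio_test_match(np.array([[1.0, 0.0]]), model)   # kept
ambiguous = ratio_test_match(np.array([[0.5, 0.5]]), model)     # filtered out
```

A smaller τ keeps fewer but more reliable matches, which is exactly the trade-off noted above between the number of matching points and the matching accuracy.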
  • In the positioning method of the present invention, the detailed sub-steps of step 3) are as follows:
  • Step 3.1): randomly select a pair of matching points from those obtained by the distance-ratio method;
  • Step 3.2): verify the selected pair of matching points as follows:
  • Step 3.2.1): let the pair be (A, A'), let fA be the descriptor of point A and gA the descriptor of point A', and let τ' be a preset second screening threshold; compute the angle θ1 between the descriptors fA and gA;
  • Step 3.2.2): compare the angle θ1 with τ'; if the constraint θ1 ≤ τ' is satisfied, A and A' are taken to be correct matching points, otherwise wrong matching points;
  • Step 3.3): from the matching points obtained by the distance-ratio method that have not yet been verified, randomly draw a pair and verify it in the same way;
  • Step 3.4): repeat step 3.3) until all matching points obtained by the distance-ratio method have been verified;
  • Step 3.5): take all correct matching points among those obtained by the distance-ratio method as the input of the RANSAC algorithm and further filter out wrong matching points, yielding the filtered matching points.
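Steps 3.2)–3.4) amount to an angle test between the two descriptors of each candidate pair. Below is a sketch under the assumption that τ' is given in radians (`angle_filter` is a hypothetical name); in step 3.5) the survivors would then be handed to RANSAC, for instance OpenCV's `cv2.findHomography(src, dst, cv2.RANSAC)`:

```python
import numpy as np

def angle_filter(pairs, tau_prime):
    """Step 3.2): for each matched pair (A, A') with descriptors f_A and g_A,
    keep the pair only when the angle theta_1 between the two descriptor
    vectors satisfies theta_1 <= tau_prime."""
    kept = []
    for f_a, g_a in pairs:
        cos_t = np.dot(f_a, g_a) / (np.linalg.norm(f_a) * np.linalg.norm(g_a))
        theta_1 = np.arccos(np.clip(cos_t, -1.0, 1.0))
        if theta_1 <= tau_prime:
            kept.append((f_a, g_a))
    return kept

# Nearly parallel descriptors pass; near-orthogonal ones (a likely mismatch
# between look-alike buildings) are rejected.
good = (np.array([1.0, 0.1]), np.array([1.0, 0.0]))
bad = (np.array([1.0, 0.0]), np.array([0.0, 1.0]))
survivors = angle_filter([good, bad], tau_prime=0.2)
```

Because repeated façades yield descriptors that pass the distance-ratio test yet point in noticeably different directions, this extra angular constraint is what tightens the screening.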
  • Adopting the above technical scheme, the present invention has the following technical effects:
  • The present invention imposes stricter matching conditions on the descriptors of buildings with repeated similar structures, thereby solving the distance-ratio method's mismatching of closely spaced descriptors of similar buildings; it effectively improves the accuracy of street view navigation image matching and hence the precision of street view navigation positioning, can be applied effectively to matching repeated-structure buildings in street view navigation, and has strong engineering and practical application value.
  • FIG. 1 is a schematic flow chart of the present invention.
  • The present invention discloses a positioning method for matching buildings with repeated structures based on street view images, including the following steps:
  • Step 1): take a street view image, detect its feature points with the feature extraction operator, and compute the corresponding descriptors:
  • Step 1.1): take a street view image;
  • Step 1.2): extract features from the image with an invariant feature operator to obtain the feature points;
  • Step 1.3): compute the descriptor of each feature point;
  • Step 1.4): select any feature point, take it as the circle centre, partition its neighbourhood with 5 equally spaced concentric circles, then divide the circumference into 6 equal parts, forming 30 sub-regions;
  • Step 1.5): weight the pixels within each ring with a Gaussian function, pixels on the same ring using the same Gaussian weighting coefficient;
  • Step 1.6): compute the gradient value in each sub-region to form a 30-dimensional global operator;
  • Step 1.7): combine the global operator and the local operator in vector form into a local-global structure feature descriptor.
  • Step 2): match the query image against the corresponding images in the preset model database, including:
  • Step 2.1): first use FLANN, the fast approximate nearest-neighbour search library, to match the descriptor vectors against the preset model database;
  • Step 2.2): apply the distance-ratio matching strategy to the resulting descriptor vectors to obtain the matching points; the distance-ratio criterion is d1 / d2 < τ, where:
  • f1 is a descriptor of the query image;
  • f1st and f2nd are the descriptors in the model database nearest and second nearest to f1;
  • d1 is the distance between the descriptors f1 and f1st;
  • d2 is the distance between the descriptors f1 and f2nd, distances here being Euclidean;
  • τ is the preset first screening threshold, which affects both the number of matching points and the matching accuracy.
  • Step 3): because street view navigation involves a large number of buildings with repeated structures, the results of the distance-ratio method often contain mismatches, so the erroneous matching points causing them must be filtered out:
  • Step 3.1): randomly select a pair of matching points from those obtained by the distance-ratio method;
  • Step 3.2): verify the selected pair of matching points as follows:
  • Step 3.2.1): let the pair be (A, A'), let fA be the descriptor of point A and gA the descriptor of point A', and let τ' be a preset second screening threshold; compute the angle θ1 between the descriptors fA and gA;
  • Step 3.2.2): compare the angle θ1 with τ'; if the constraint θ1 ≤ τ' is satisfied, A and A' are taken to be correct matching points, otherwise wrong matching points;
  • Step 3.3): from the matching points obtained by the distance-ratio method that have not yet been verified, randomly draw a pair and verify it in the same way;
  • Step 3.4): repeat step 3.3) until all matching points obtained by the distance-ratio method have been verified;
  • Step 3.5): take all correct matching points among those obtained by the distance-ratio method as the input of the RANSAC algorithm and further filter out wrong matching points, yielding the filtered matching points.
  • Step 4): compute the user's current position from the filtered matching points and feed the position information back to the user.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Navigation (AREA)
  • Image Analysis (AREA)

Abstract

A positioning method for matching buildings with repeated structures based on street view images: detect the feature points of a street view image with a feature extraction operator and compute the corresponding descriptors; match the image to be matched against the model images in a preset model database to obtain the matching points; filter out the erroneous matching points that cause mismatches; compute the user's current position from the filtered matching points and feed the position information back to the user. The scheme effectively resolves the mismatching caused by buildings with repeated, similar structures, improves positioning accuracy in street view navigation, and effectively reduces the complexity and computational load of the algorithm.

Description

Positioning method for matching buildings with repeated structures based on street view images

Technical Field
The invention relates to the technical field of navigation, and in particular to a positioning method for matching buildings with repeated structures based on street view images.
Background Art
In modern society, the use of GPS is ubiquitous. Opening GPS in any of the map applications on an in-car system or mobile phone quickly and accurately provides the user's current position, heading and other navigation information. However, GPS navigation also has problems. In special environments such as tunnels and underground shopping malls, where the GPS signal is weak or absent, traditional map navigation can hardly function; feature-based visual navigation methods can then take over.
As early as the 1960s, image tracking technology found wide application in the military field. In the late 1970s and early 1980s, Professor Marr of MIT founded the theory of computational vision, pushing vision research a large step forward. As visual navigation technology has continued to develop and mature, its applications in social life, the military and other fields have become ever wider.
Feature-based visual navigation technology mainly comprises two aspects: feature extraction and feature matching. Feature extraction is an important premise and foundation of visual feature methods and also affects matching accuracy. Commonly used feature extraction methods include the Harris and SUSAN corner detectors, but their drawback is the lack of scale invariance, which makes them unsuitable for street view navigation. To solve this problem, Lowe proposed the SIFT operator, and methods improving on it include the SURF operator, HOG and others. The advantage of scale-invariant feature descriptors is that they are invariant to scaling and rotation and robust to illumination and viewpoint changes, which suits them to street view navigation.
Feature matching techniques mainly include brute-force matching and FLANN matching, but the results they produce usually contain many mismatches: since street views contain many repeated, similar buildings, mismatches arise during matching. To solve this, a more effective matching strategy is the distance-ratio method. It sets a threshold and treats as correct only those matches for which the ratio of a descriptor's nearest distance to its second-nearest distance is below the threshold, thereby rejecting mismatched points; the correct matches are retained, improving matching accuracy. The method is flawed, however: the feature descriptors of buildings with repeated similar structures are usually close together, so the distance-ratio method distinguishes similar buildings poorly. Addressing this problem, this patent proposes a new matching strategy that alleviates the mismatching caused by repeated similar structures.
Summary of the Invention
The technical problem to be solved by the present invention is the mismatching caused by buildings with repeated similar structures in street view navigation, together with the inability of the commonly used distance-ratio method to correctly match feature descriptors that lie very close together; to address it, a new matching method for repeated-structure buildings in street view navigation is proposed.
To solve the above technical problem, the present invention adopts the following technical scheme:
A positioning method for matching buildings with repeated structures based on street view images, comprising the following steps:
Step 1): take a street view image, detect its feature points with a feature extraction operator, and compute the corresponding descriptors;
Step 2): match the image to be matched against the model images in a preset model database to obtain the matching points;
Step 3): filter out the erroneous matching points that cause mismatches;
Step 4): compute the user's current position from the filtered matching points and feed the position information back to the user.
In the positioning method of the present invention, the detailed sub-steps of step 1) are as follows:
Step 1.1): take a street view image;
Step 1.2): extract features from the image with an invariant feature operator to obtain the feature points;
Step 1.3): compute the descriptor of each feature point;
Step 1.4): select any feature point, take it as the circle centre, partition its neighbourhood with m equally spaced concentric circles, then divide the circumference into n equal parts, forming m*n sub-regions, where m and n are positive integers;
Step 1.5): weight the pixels within each ring with a Gaussian function, pixels on the same ring using the same Gaussian weighting coefficient;
Step 1.6): compute the gradient value in each sub-region to form an m*n-dimensional global operator;
Step 1.7): combine the global operator and the local operator in vector form into a local-global structure feature descriptor.
In the positioning method of the present invention, the detailed sub-steps of step 2) are as follows:
Step 2.1): first use FLANN, the fast approximate nearest-neighbour search library, to match the descriptor vectors against the preset model database;
Step 2.2): apply the distance-ratio matching strategy to the resulting descriptor vectors to obtain the matching points; the distance-ratio criterion is:
d1 / d2 < τ
where f1 is a descriptor of the query image; f1st and f2nd are the descriptors in the model database nearest and second nearest to f1; d1 is the distance between the descriptors f1 and f1st; d2 is the distance between the descriptors f1 and f2nd, distances here being Euclidean; and τ is a preset first screening threshold, which affects both the number of matching points and the matching accuracy.
In the positioning method of the present invention, the detailed sub-steps of step 3) are as follows:
Step 3.1): randomly select a pair of matching points from those obtained by the distance-ratio method;
Step 3.2): verify the selected pair of matching points as follows:
Step 3.2.1): let the pair be (A, A'), let fA be the descriptor of point A and gA the descriptor of point A', and let τ' be a preset second screening threshold; compute the angle θ1 between the descriptors fA and gA;
Step 3.2.2): compare the angle θ1 with τ'; if the constraint θ1 ≤ τ' is satisfied, A and A' are taken to be correct matching points, otherwise wrong matching points;
Step 3.3): from the matching points obtained by the distance-ratio method that have not yet been verified, randomly draw a pair and verify it in the same way;
Step 3.4): repeat step 3.3) until all matching points obtained by the distance-ratio method have been verified;
Step 3.5): take all correct matching points among those obtained by the distance-ratio method as the input of the RANSAC algorithm and further filter out wrong matching points, yielding the filtered matching points.
In the positioning method of the present invention, in said step 2), m = 5 and n = 6.
Compared with the prior art, adopting the above technical scheme the present invention has the following technical effects:
1. The present invention imposes stricter matching conditions on the descriptors of buildings with repeated similar structures, thereby solving the distance-ratio method's mismatching of closely spaced descriptors of similar buildings; it effectively improves the accuracy of street view navigation image matching and hence the precision of street view navigation positioning, can be applied effectively to matching repeated-structure buildings in street view navigation, and has strong engineering and practical application value;
2. The principle is reliable, the approach clear and the performance stable, providing a new idea and method for matching repeated-structure buildings in street view navigation.
Brief Description of the Drawings
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description of Embodiments
The technical scheme of the present invention is described in further detail below with reference to the accompanying drawings:
The present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth here. Rather, these embodiments are provided so that this disclosure is thorough and complete and fully conveys the scope of the invention to those skilled in the art. In the drawings, components are enlarged for clarity.
As shown in FIG. 1, the present invention discloses a positioning method for matching buildings with repeated structures based on street view images, comprising the following steps:
Step 1): take a street view image, detect its feature points with the feature extraction operator, and compute the corresponding descriptors:
Step 1.1): take a street view image;
Step 1.2): extract features from the image with an invariant feature operator to obtain the feature points;
Step 1.3): compute the descriptor of each feature point;
Step 1.4): select any feature point, take it as the circle centre, partition its neighbourhood with 5 equally spaced concentric circles, then divide the circumference into 6 equal parts, forming 30 sub-regions;
Step 1.5): weight the pixels within each ring with a Gaussian function, pixels on the same ring using the same Gaussian weighting coefficient;
Step 1.6): compute the gradient value in each sub-region to form a 30-dimensional global operator;
Step 1.7): combine the global operator and the local operator in vector form into a local-global structure feature descriptor;
Step 2): match the query image against the corresponding images in the preset model database, including:
Step 2.1): first use FLANN, the fast approximate nearest-neighbour search library, to match the descriptor vectors against the preset model database;
Step 2.2): apply the distance-ratio matching strategy to the resulting descriptor vectors to obtain the matching points; the distance-ratio criterion is:
d1 / d2 < τ
where f1 is a descriptor of the query image; f1st and f2nd are the descriptors in the model database nearest and second nearest to f1; d1 is the distance between the descriptors f1 and f1st; d2 is the distance between the descriptors f1 and f2nd, distances here being Euclidean; and τ is the preset first screening threshold, which affects both the number of matching points and the matching accuracy;
Step 3): because street view navigation involves a large number of buildings with repeated structures, the results of the distance-ratio method often contain mismatches, so the erroneous matching points causing them must be filtered out:
Step 3.1): randomly select a pair of matching points from those obtained by the distance-ratio method;
Step 3.2): verify the selected pair of matching points as follows:
Step 3.2.1): let the pair be (A, A'), let fA be the descriptor of point A and gA the descriptor of point A', and let τ' be a preset second screening threshold; compute the angle θ1 between the descriptors fA and gA;
Step 3.2.2): compare the angle θ1 with τ'; if the constraint θ1 ≤ τ' is satisfied, A and A' are taken to be correct matching points, otherwise wrong matching points;
Step 3.3): from the matching points obtained by the distance-ratio method that have not yet been verified, randomly draw a pair and verify it in the same way;
Step 3.4): repeat step 3.3) until all matching points obtained by the distance-ratio method have been verified;
Step 3.5): take all correct matching points among those obtained by the distance-ratio method as the input of the RANSAC algorithm and further filter out wrong matching points, yielding the filtered matching points.
Step 4): compute the user's current position from the filtered matching points and feed the position information back to the user.
The query image is matched against the images in the preset model database, and database images with a matching accuracy above 90% are retained. If several images exceed this threshold, the image with the highest matching accuracy is kept; the location coordinates associated with that image give the user's current position, from which the position information is determined.
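This retrieval rule can be sketched as follows; the `scored_database` layout (a stored location plus a matching-accuracy score per model image) is an assumption for illustration, with the score standing in for the full matching and verification pipeline of steps 2) and 3):

```python
def localize(scored_database, min_accuracy=0.9):
    """Retain only database images whose matching accuracy exceeds the 90%
    threshold and return the location associated with the best-scoring one;
    return None when no model image qualifies."""
    best_location, best_accuracy = None, min_accuracy
    for location, accuracy in scored_database:
        if accuracy > best_accuracy:
            best_location, best_accuracy = location, accuracy
    return best_location

# Three model images with assumed matching accuracies against the query;
# the coordinates are illustrative placeholders, not values from the patent.
db = [((118.79, 32.06), 0.85), ((118.80, 32.07), 0.95), ((118.81, 32.08), 0.92)]
position = localize(db)     # the 0.95-scoring image determines the position
```

Returning None when nothing clears the threshold mirrors the text: if no database image matches well enough, no position can be reported.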
Those skilled in the art will understand that, unless defined otherwise, all terms used here (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It should further be understood that terms such as those defined in ordinary dictionaries should be interpreted as having meanings consistent with their meaning in the context of the prior art and, unless defined as here, are not to be interpreted in an idealized or overly formal sense.
The specific embodiments described above further explain the objectives, technical schemes and beneficial effects of the present invention in detail. It should be understood that the above are merely specific embodiments of the present invention and are not intended to limit it; any modification, equivalent substitution or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (5)

  1. A positioning method for matching buildings with repeated structures based on street view images, characterized by comprising the following steps:
    Step 1): take a street view image, detect its feature points with a feature extraction operator, and compute the corresponding descriptors;
    Step 2): match the image to be matched against the model images in a preset model database to obtain the matching points;
    Step 3): filter out the erroneous matching points that cause mismatches;
    Step 4): compute the user's current position from the filtered matching points and feed the position information back to the user.
  2. The positioning method for matching buildings with repeated structures based on street view images according to claim 1, characterized in that the detailed sub-steps of step 1) are as follows:
    Step 1.1): take a street view image;
    Step 1.2): extract features from the image with an invariant feature operator to obtain the feature points;
    Step 1.3): compute the descriptor of each feature point;
    Step 1.4): select any feature point, take it as the circle centre, partition its neighbourhood with m equally spaced concentric circles, then divide the circumference into n equal parts, forming m*n sub-regions, where m and n are positive integers;
    Step 1.5): weight the pixels within each ring with a Gaussian function, pixels on the same ring using the same Gaussian weighting coefficient;
    Step 1.6): compute the gradient value in each sub-region to form an m*n-dimensional global operator;
    Step 1.7): combine the global operator and the local operator in vector form into a local-global structure feature descriptor.
  3. The positioning method for matching buildings with repeated structures based on street view images according to claim 2, characterized in that the detailed sub-steps of step 2) are as follows:
    Step 2.1): first use FLANN, the fast approximate nearest-neighbour search library, to match the descriptor vectors against the preset model database;
    Step 2.2): apply the distance-ratio matching strategy to the resulting descriptor vectors to obtain the matching points; the distance-ratio criterion is:
    d1 / d2 < τ
    where f1 is a descriptor of the query image; f1st and f2nd are the descriptors in the model database nearest and second nearest to f1; d1 is the distance between the descriptors f1 and f1st; d2 is the distance between the descriptors f1 and f2nd, distances here being Euclidean; and τ is a preset first screening threshold, which affects both the number of matching points and the matching accuracy.
  4. The positioning method for matching buildings with repeated structures based on street view images according to claim 3, characterized in that the detailed sub-steps of step 3) are as follows:
    Step 3.1): randomly select a pair of matching points from those obtained by the distance-ratio method;
    Step 3.2): verify the selected pair of matching points as follows:
    Step 3.2.1): let the pair be (A, A'), let fA be the descriptor of point A and gA the descriptor of point A', and let τ' be a preset second screening threshold; compute the angle θ1 between the descriptors fA and gA;
    Step 3.2.2): compare the angle θ1 with τ'; if the constraint θ1 ≤ τ' is satisfied, A and A' are taken to be correct matching points, otherwise wrong matching points;
    Step 3.3): from the matching points obtained by the distance-ratio method that have not yet been verified, randomly draw a pair and verify it in the same way;
    Step 3.4): repeat step 3.3) until all matching points obtained by the distance-ratio method have been verified;
    Step 3.5): take all correct matching points among those obtained by the distance-ratio method as the input of the RANSAC algorithm and further filter out wrong matching points, yielding the filtered matching points.
  5. The positioning method for matching buildings with repeated structures based on street view images according to claim 3, characterized in that in said step 2), m = 5 and n = 6.
PCT/CN2019/115900 2018-12-28 2019-11-06 Positioning method for matching buildings with repeated structures based on street view images WO2020134617A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811620866.7 2018-12-28
CN201811620866.7A CN109858361B (zh) 2018-12-28 Positioning method for matching buildings with repeated structures based on street view images

Publications (1)

Publication Number Publication Date
WO2020134617A1 true WO2020134617A1 (zh) 2020-07-02

Family

ID=66892780

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/115900 WO2020134617A1 (zh) 2018-12-28 2019-11-06 Positioning method for matching buildings with repeated structures based on street view images

Country Status (2)

Country Link
CN (1) CN109858361B (zh)
WO (1) WO2020134617A1 (zh)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111914855A (zh) * 2020-07-31 2020-11-10 西安电子科技大学 Prior feature point sparsification method for very large digital image maps
CN111966769A (zh) * 2020-07-14 2020-11-20 北京城市象限科技有限公司 Life-circle-based information recommendation method, apparatus, device and medium
CN112070813A (zh) * 2020-08-21 2020-12-11 国网山东省电力公司青岛供电公司 Feature matching method based on connecting-line feature consistency
CN112233178A (zh) * 2020-11-11 2021-01-15 广东拓斯达科技股份有限公司 Machine-vision-based distance measurement method for dynamic material in complex environments
CN113160284A (zh) * 2021-03-09 2021-07-23 大连海事大学 Guided spatially consistent photovoltaic image registration method based on local similar-structure constraints
CN113658238A (zh) * 2021-08-23 2021-11-16 重庆大学 High-precision near-infrared vein image matching method based on improved feature detection
CN113657194A (zh) * 2021-07-27 2021-11-16 武汉理工大学 Vehicle camera image feature extraction and matching method based on an improved SURF algorithm
CN114041878A (zh) * 2021-10-19 2022-02-15 山东建筑大学 Three-dimensional reconstruction method and system for CT images for a joint-replacement surgical robot
CN114299462A (zh) * 2021-12-28 2022-04-08 湖北工业大学 Multi-scale scene recognition method for underground parking garages based on anchor images
CN116612306A (zh) * 2023-07-17 2023-08-18 山东顺发重工有限公司 Computer-vision-based intelligent flange alignment method and system
CN116797407A (zh) * 2023-08-21 2023-09-22 北京华邑建设集团有限公司 Outdoor building site construction management method and system
CN112233178B (zh) * 2020-11-11 2024-05-17 广东拓斯达科技股份有限公司 Machine-vision-based distance measurement method for dynamic material in complex environments

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109858361B (zh) * 2018-12-28 2023-04-18 南京航空航天大学 Positioning method for matching buildings with repeated structures based on street view images
CN111383335B (zh) * 2020-03-05 2023-03-21 南京大学 Building three-dimensional modelling method combining crowdsourced photos with a two-dimensional map

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102426019A (zh) * 2011-08-25 2012-04-25 航天恒星科技有限公司 Unmanned aerial vehicle scene-matching aided navigation method and system
CN104036480A (zh) * 2014-06-20 2014-09-10 天津大学 Fast mismatched-point elimination method based on the SURF algorithm
CN108388902A (zh) * 2018-02-12 2018-08-10 山东大学 Composite 3D descriptor construction method combining global frame points and local SHOT features
CN109086350A (zh) * 2018-07-13 2018-12-25 哈尔滨工业大学 WiFi-based hybrid image retrieval method
CN109858361A (zh) * 2018-12-28 2019-06-07 南京航空航天大学 Positioning method for matching buildings with repeated structures based on street view images

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104598885B (zh) * 2015-01-23 2017-09-22 西安理工大学 Text sign detection and localization method for street view images
CN107084736A (zh) * 2017-04-27 2017-08-22 维沃移动通信有限公司 Navigation method and mobile terminal
CN107133325B (zh) * 2017-05-05 2020-01-07 南京大学 Geospatial positioning method for Internet photos based on street view maps


Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111966769A (zh) * 2020-07-14 2020-11-20 北京城市象限科技有限公司 Life-circle-based information recommendation method, apparatus, device and medium
CN111966769B (zh) * 2020-07-14 2024-01-02 北京城市象限科技有限公司 Life-circle-based information recommendation method, apparatus, device and medium
CN111914855A (zh) * 2020-07-31 2020-11-10 西安电子科技大学 Prior feature point sparsification method for very large digital image maps
CN111914855B (zh) * 2020-07-31 2024-04-05 西安电子科技大学 Prior feature point sparsification method for very large digital image maps
CN112070813A (zh) * 2020-08-21 2020-12-11 国网山东省电力公司青岛供电公司 Feature matching method based on connecting-line feature consistency
CN112233178A (zh) * 2020-11-11 2021-01-15 广东拓斯达科技股份有限公司 Machine-vision-based distance measurement method for dynamic material in complex environments
CN112233178B (zh) * 2020-11-11 2024-05-17 广东拓斯达科技股份有限公司 Machine-vision-based distance measurement method for dynamic material in complex environments
CN113160284A (zh) * 2021-03-09 2021-07-23 大连海事大学 Guided spatially consistent photovoltaic image registration method based on local similar-structure constraints
CN113160284B (zh) * 2021-03-09 2024-04-30 大连海事大学 Guided spatially consistent photovoltaic image registration method based on local similar-structure constraints
CN113657194B (zh) * 2021-07-27 2023-09-22 武汉理工大学 Vehicle camera image feature extraction and matching method based on an improved SURF algorithm
CN113657194A (zh) * 2021-07-27 2021-11-16 武汉理工大学 Vehicle camera image feature extraction and matching method based on an improved SURF algorithm
CN113658238A (zh) * 2021-08-23 2021-11-16 重庆大学 High-precision near-infrared vein image matching method based on improved feature detection
CN113658238B (zh) * 2021-08-23 2023-08-08 重庆大学 High-precision near-infrared vein image matching method based on improved feature detection
CN114041878A (zh) * 2021-10-19 2022-02-15 山东建筑大学 Three-dimensional reconstruction method and system for CT images for a joint-replacement surgical robot
CN114299462B (zh) * 2021-12-28 2024-04-23 湖北工业大学 Multi-scale scene recognition method for underground parking garages based on anchor images
CN114299462A (zh) * 2021-12-28 2022-04-08 湖北工业大学 Multi-scale scene recognition method for underground parking garages based on anchor images
CN116612306B (zh) * 2023-07-17 2023-09-26 山东顺发重工有限公司 Computer-vision-based intelligent flange alignment method and system
CN116612306A (zh) * 2023-07-17 2023-08-18 山东顺发重工有限公司 Computer-vision-based intelligent flange alignment method and system
CN116797407A (zh) * 2023-08-21 2023-09-22 北京华邑建设集团有限公司 Outdoor building site construction management method and system
CN116797407B (zh) * 2023-08-21 2023-11-03 北京华邑建设集团有限公司 Outdoor building site construction management method and system

Also Published As

Publication number Publication date
CN109858361B (zh) 2023-04-18
CN109858361A (zh) 2019-06-07

Similar Documents

Publication Publication Date Title
WO2020134617A1 (zh) Positioning method for matching buildings with repeated structures based on street view images
Yao et al. Multi-modal remote sensing image matching considering co-occurrence filter
Liu et al. Seqlpd: Sequence matching enhanced loop-closure detection based on large-scale point cloud description for self-driving vehicles
CN104376548A (zh) Fast image stitching method based on an improved SURF algorithm
CN110175615B (zh) Model training method, domain-adaptive visual place recognition method and apparatus
CN104331682A (zh) Automatic building recognition method based on Fourier descriptors
Lange et al. Dld: A deep learning based line descriptor for line feature matching
WO2015042772A1 (zh) Salient-object change detection method for remote sensing images
CN107180436A (zh) Improved KAZE image matching algorithm
CN102938147A (zh) Visual positioning method for low-altitude unmanned aerial vehicles based on fast robust features
Wang et al. Edge Enhanced Direct Visual Odometry.
CN105913069B (zh) Image recognition method
CN103955950A (zh) Image tracking method using keypoint feature matching
Ma et al. Remote sensing image registration based on multifeature and region division
CN105809678A (zh) Global matching method for line-segment features between two views under short-baseline conditions
Liu et al. Motion consistency-based correspondence growing for remote sensing image matching
CN110246165B (zh) Method and system for increasing the registration speed of visible-light and SAR images
CN114332172A (zh) Improved laser point cloud registration method based on covariance matrices
CN106651756B (zh) Image registration method based on SIFT and a verification mechanism
Quan et al. A Novel Coarse-to-Fine Deep Learning Registration Framework for Multi-Modal Remote Sensing Images
CN111951263A (zh) Mechanical part drawing retrieval method based on convolutional neural networks
Shen et al. A detector-oblivious multi-arm network for keypoint matching
Li et al. A novel automatic image stitching algorithm for ceramic microscopic images
Zhao et al. Research on Feature Matching of an Improved ORB Algorithm
Wang et al. A multi-sensor image matching method based on KAZE-HOG features

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19905727

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19905727

Country of ref document: EP

Kind code of ref document: A1
