CN110197157B - Pavement crack growth detection method based on historical crack data - Google Patents
Pavement crack growth detection method based on historical crack data
- Publication number
- CN110197157B CN110197157B CN201910469277.1A CN201910469277A CN110197157B CN 110197157 B CN110197157 B CN 110197157B CN 201910469277 A CN201910469277 A CN 201910469277A CN 110197157 B CN110197157 B CN 110197157B
- Authority
- CN
- China
- Prior art keywords
- crack
- image
- historical
- data
- points
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000001514 detection method Methods 0.000 title claims abstract description 23
- 239000011159 matrix material Substances 0.000 claims abstract description 46
- 238000000034 method Methods 0.000 claims abstract description 25
- 238000013507 mapping Methods 0.000 claims description 24
- 238000004458 analytical method Methods 0.000 claims description 22
- 238000004364 calculation method Methods 0.000 claims description 6
- 230000009466 transformation Effects 0.000 claims description 5
- 238000012545 processing Methods 0.000 claims description 3
- 230000004807 localization Effects 0.000 claims description 2
- 230000005855 radiation Effects 0.000 claims description 2
- 238000012360 testing method Methods 0.000 description 5
- 239000000284 extract Substances 0.000 description 3
- 238000007781 pre-processing Methods 0.000 description 3
- 238000010586 diagram Methods 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000012805 post-processing Methods 0.000 description 2
- 238000005316 response function Methods 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 238000013527 convolutional neural network Methods 0.000 description 1
- 238000013480 data collection Methods 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000006870 function Effects 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 230000008569 process Effects 0.000 description 1
- 238000012216 screening Methods 0.000 description 1
- 230000001360 synchronised effect Effects 0.000 description 1
- 238000012549 training Methods 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/587—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/80—Recognising image objects characterised by unique random patterns
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A30/00—Adapting or protecting infrastructure or their operation
- Y02A30/60—Planning or developing urban green infrastructure
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Library & Information Science (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
Description
Technical Field
The invention relates to image processing technology, and in particular to a pavement crack growth detection method based on historical crack data.
Background Art
At present, some progress has been made in pavement crack detection. For example, patent CN106548182, filed on November 2, 2016 and entitled "Pavement crack detection method and device based on deep learning and main cause analysis", discloses a pavement crack detection method based on a convolutional neural network and main cause analysis. Patent CN106324084, filed on August 30, 2016 and entitled "Crack depth detection method", discloses a method and an instrument for detecting crack depth.
In recent years, as pavement crack detection technology has developed rapidly, the demand for pavement crack detection methods, and in particular for crack propagation analysis of pavements, has become increasingly urgent. At present, supervised machine-learning methods are the main means of pavement crack detection; they require training on a large amount of labeled data, which makes them difficult to apply widely in practice. This patent proposes a method that maps and compares historical image data collected at a certain time interval against the current crack image, improving detection accuracy while reducing the amount of training data required.
Summary of the Invention
The technical problem to be solved by the present invention is to provide a pavement crack growth detection method based on historical crack data, in view of the defects in the prior art.
The technical solution adopted by the present invention to solve this problem is a pavement crack growth detection method based on historical crack data, comprising the following steps:
1) Acquire the current pavement crack image in real time, and synchronously acquire the positioning information corresponding to the current pavement crack image;
2) Initial positioning: based on the current positioning information and the location information in the historical map data, extract from the historical pavement image data the historical images whose positions lie within a threshold distance of the current position, and select from them multiple historical images similar to the current pavement image;
3) Image-level positioning: among the similar images coarsely matched in step 2), find the single image closest to the current pavement image, as follows:
3.1) Convert the images to grayscale;
3.2) Compute the Hamming distances between all feature descriptors of the current grayscale image and of the similar images, and compare the Hamming distances to achieve coarse matching of feature points, thereby finding the feature points shared by the current grayscale image and each similar image; these matched feature points serve as pavement fingerprint information;
3.3) By comparing the numbers of shared feature points, find the closest pavement image among the coarsely matched similar images;
4) Pixel-level positioning: map the historical cracks labeled in the closest pavement image onto the current pavement crack image;
5) Perform RGM (Region Growing Method)-based mapped-crack analysis on the current pavement crack image, detecting all pixels in the acquired crack image that belong to newly grown cracks.
In the above solution, the feature descriptors in step 3.2) are obtained as follows:
3.2.1) Detect image feature points: detect corner points with the Harris algorithm and use them as image feature points;
3.2.2) Add orientation information: add orientation information to the extracted feature points so that the extracted feature points are orientation-invariant; the orientation is obtained from the centroid of the pixel block, computed from the moments of the image within a circular window;
3.2.3) ORB feature point matching:
Select n pairs of features at the feature points to form a mapping matrix s of size 2×2n, whose elements are the X- and Y-axis coordinates of the feature pairs. Then use the direction from the feature point to the centroid to obtain the affine transformation matrix R, compute the new description matrix sθ from R, and combine it with the BRIEF descriptor to obtain the ORB feature descriptor.
In the above solution, the method for determining image feature points in step 3.2.1) is: compare a pixel P in the image with the pixels on a circular window formed by several surrounding points, and take the sum N over these points; when N is greater than the decision criterion, the point is determined to be an image feature point:
N = Σ_{x∈circle(p)} (|I(x) - I(p)| > ε)
where I(x) is the gray value of the current pixel on the circle, I(p) is the gray value of pixel P, ε is the set threshold, and circle(p) is the extent of the circular window around pixel P; the formula is evaluated over the pixels within this range.
In the above solution, the pixel-level positioning in step 4) maps the labeled historical crack pixels onto the current crack image.
In the above solution, the pixel-level positioning in step 4) computes the H matrix between the two images from the ORB-matched image feature points, and then uses the H matrix to map the labeled historical crack pixels onto the current crack image.
In the above solution, the specific steps of the crack mapping based on multi-scale positioning in step 4) are:
Assuming the road is a plane, the basic geometry can be described by a homography matrix under the pinhole camera model; from the two linear constraints on the homography matrix, the historical crack labels are mapped onto the query image through the following relationship:
[u′i y′i 1]^T = H·[ui yi 1]^T  (i = 1, 2, …, n)
where n is the number of historical crack label pixels, [u′i y′i]^T are the coordinates of the mapped crack data on the query image, and [ui yi]^T are the coordinates of the historical crack label pixels.
In the above solution, the specific steps of the RGM-based historical crack analysis in step 5) are:
5.1) Analysis of the gray-value distribution of the mapped cracks: use the query crack labels mapped in the previous step as "ideal" initial seed points; starting from these seed points, grow the region by finding neighboring points whose attributes, including color and intensity, are similar to those of the seed points; from the mapped labels, compute the pixel values from the mapping correspondence on the query image and, using statistical principles, plot the image intensity distribution histogram of the mapped crack data;
5.2) Represent the intensity distribution of the mapped crack pixels with a Gaussian model:
Establish a Gaussian model to represent this distribution and use its distribution characteristics as the operator; the corresponding mean and standard deviation are computed as:
ω = (1/N)·Σ_{i=1..N} I(u′i, y′i)
σ = sqrt( (1/N)·Σ_{i=1..N} (I(u′i, y′i) - ω)² )
where N is the number of mapped crack label pixels.
5.3) Growing-crack analysis: a point in the image is classified as a crack when its intensity satisfies the following condition:
I(pu, pv) ∈ [0, ω + λσ]
where λ is a constant determined according to the actual application.
The beneficial effects of the present invention are as follows: crack detection is carried out in three steps: GPS initial positioning, image-level positioning, and RGM-based historical crack analysis. GPS initial positioning compares the current positioning information with the location information in the historical map data and extracts multiple similar images close to the current pavement image. Image-level positioning detects corner points with the Harris algorithm, adds orientation information to the extracted feature points so that they are orientation-invariant, and obtains precisely matched image data after ORB feature point matching. The RGM-based historical crack analysis maps the matched historical crack labels onto the query image, represents the intensity distribution of the mapped crack pixels with a Gaussian model, and finally classifies the pixels satisfying the condition as cracks. By referring to historical crack data, the method provides an effective and reliable strategy for studying how the crack state changes over time, greatly simplifying and improving crack detection and identification.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will be further described below with reference to the accompanying drawings and embodiments, in which:
FIG. 1 is a flow chart of the method according to an embodiment of the present invention;
FIG. 2 is a flow chart of the method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of coordinate-matching initial positioning according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of seed points and the growing region according to an embodiment of the present invention;
FIG. 5 is an image intensity histogram of the mapped crack data according to an embodiment of the present invention.
DETAILED DESCRIPTION
In order to make the purpose, technical solution, and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.
As shown in FIG. 1 and FIG. 2, the crack growth detection method based on historical crack data according to an embodiment of the present invention comprises: multi-scale positioning, crack mapping based on multi-scale positioning, and RGM-based mapped-crack analysis. Multi-scale positioning comprises GPS initial positioning, image-level positioning, and pixel-level positioning; RGM-based mapped-crack analysis comprises analysis of the gray-value distribution of historical cracks, construction of a Gaussian model, and growing-crack analysis.
The specific steps of the GPS initial positioning are:
S1. The image sensing module mounted on the vehicle to be positioned acquires the current pavement crack image in real time, while the positioning module synchronously acquires the current positioning information;
S2. By comparing the current positioning information with the location information in the historical map data, multiple similar images close to the current pavement image are extracted from the historical map data, finally obtaining the historical image data within the threshold distance and satisfying the following formulas:
dji = dist(Gj, Gi)
Pos = {Gi | dji ≤ k}
where Gj is the GPS coordinate of the j-th query crack data, Gi is the GPS coordinate of the i-th historical crack data, d is the GPS distance between the current crack data and the historical crack data, k is the threshold distance for candidate historical crack data, and Pos is the set of candidate historical crack images.
In practical applications the GPS positioning accuracy is about 10 meters, so in this step the positioning accuracy is used as the threshold to preliminarily screen the historical cracks, completing the GPS initial positioning as the first step of the multi-scale positioning.
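The screening step above can be sketched as follows, assuming each historical record mi = {Gi, Ii, Li} is stored as a dictionary with a "gps" field holding a (latitude, longitude) pair and using the haversine formula as the distance function dist(·,·), which the description does not fix; the 10 m threshold follows the text.

```python
import math

def haversine_m(g1, g2):
    """Great-circle distance in meters between two (lat, lon) points given in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*g1, *g2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371000.0 * math.asin(math.sqrt(a))

def gps_initial_positioning(query_gps, historical_records, k=10.0):
    """Return the candidate set Pos: historical records within k meters of the query position."""
    return [rec for rec in historical_records
            if haversine_m(query_gps, rec["gps"]) <= k]

# Illustrative usage with hypothetical coordinates and record layout:
# candidates = gps_initial_positioning(
#     (30.5231, 114.3662),
#     [{"gps": (30.5230, 114.3661), "image": img, "label": label_pixels}])
```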
The specific steps of the image-level positioning are:
S1. Detect image feature points: detect corner points with the Harris algorithm, i.e., compare the difference between a pixel P in the image and the pixels on a circle formed by several surrounding points:
N = Σ_{x∈circle(p)} (|I(x) - I(p)| > ε)
where I(x) is the gray value of the current pixel on the circle, I(p) is the gray value of pixel P, and ε is the set threshold; the sum of the response values over these points is N, and when N is greater than the decision criterion, the point is an image feature point;
Generally, the integer part of 3/4 of the number of points on the circle is taken as the decision criterion;
S2. Add orientation information: add orientation information to the extracted feature points so that the extracted feature points are orientation-invariant. The orientation is computed from the moments of the image within a circular window. The image moments are defined as follows:
mpq = Σ_{x,y} x^p · y^q · I(x, y),  p, q ∈ {0, 1}
Therefore, the centroid of the image patch is computed from the moments as follows:
C = (m10/m00, m01/m00)
The orientation of the pixel patch is defined as follows:
θ = arctan(m10, m01)
where mpq (p, q ∈ {0, 1}) denotes the moments of the image patch, I(x, y) is the gray value at coordinate (x, y), and θ is the angle between the centroid and the feature point;
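A minimal sketch of the moment-based orientation computation described above, assuming the input is a square grayscale patch (a NumPy array) centered on the feature point that stands in for the circular window; the standard intensity-centroid convention atan2(m01, m10) is used here.

```python
import numpy as np

def patch_orientation(patch):
    """Orientation of a grayscale patch from its intensity moments m_pq (p, q in {0, 1})."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Take coordinates relative to the patch center so the moments are about the keypoint.
    xs = xs - (w - 1) / 2.0
    ys = ys - (h - 1) / 2.0
    m10 = np.sum(xs * patch)
    m01 = np.sum(ys * patch)
    # Angle between the intensity centroid and the feature point.
    return np.arctan2(m01, m10)
```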
S3. ORB feature point matching:
Select n pairs of features at the feature points to form the matrix s:
s = [x1 x2 … x2n; y1 y2 … y2n]
The size of the matrix s is 2×2n, where xi, yi (i ∈ (1, n)) denote the X- and Y-axis coordinates of the i-th feature pair. Then the direction from the feature point to the centroid is used to obtain the affine transformation matrix R, and a new description matrix sθ is computed from R as:
Rθ = [cosθ -sinθ; sinθ cosθ],  sθ = Rθ·s
Combining with the BRIEF descriptor, the ORB feature descriptor is obtained:
gn(P, θ) = fn(P) | (xi, yi) ∈ sθ
Here n is set to 256, and from the resulting 256-dimensional ORB feature descriptors the Hamming distances between all feature descriptors of the current grayscale image to be positioned and of the similar images are computed. Comparing the Hamming distances achieves coarse matching of feature points, and the RANSAC algorithm is used to remove wrong matches, thereby finding the feature points shared by the current grayscale image to be positioned and each similar image; these matched feature points serve as pavement fingerprint information. By comparing the numbers of shared feature points, the image closest to the pavement is found among the coarsely matched similar images, namely the one with the largest number of feature points matched with the image to be positioned, so that the fingerprint information achieves a precise match for the image to be positioned.
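The image-level positioning described above can be sketched with OpenCV's ORB implementation and a Hamming-distance matcher; the detector parameters, the RANSAC reprojection threshold, and the plain list of candidate images are illustrative assumptions rather than values fixed by the patent.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)          # oFAST keypoints + 256-bit rBRIEF descriptors
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def count_inlier_matches(query_gray, candidate_gray):
    """Match ORB features and count RANSAC-verified correspondences (the 'fingerprint' size)."""
    kq, dq = orb.detectAndCompute(query_gray, None)
    kc, dc = orb.detectAndCompute(candidate_gray, None)
    if dq is None or dc is None:
        return 0, None
    matches = matcher.match(dq, dc)          # coarse matching by Hamming distance
    if len(matches) < 4:
        return 0, None
    src = np.float32([kq[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kc[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # removes wrong matches
    return (int(mask.sum()) if mask is not None else 0), H

def closest_reference(query_gray, candidate_grays):
    """Pick the candidate historical image sharing the most matched feature points."""
    scores = [count_inlier_matches(query_gray, c)[0] for c in candidate_grays]
    return candidate_grays[int(np.argmax(scores))]
```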
Pixel-level positioning is used to compute the H matrix between the two images, and the H matrix is then used to map the labeled historical cracks onto the current crack image. The specific steps are:
Assuming the road is a plane, the basic geometry can be described by a homography matrix under the pinhole camera model, with the following formula:
[u′ y′ 1]^T = H·[u y 1]^T,  H = [h1 h2 h3; h4 h5 h6; h7 h8 h9]
hi denotes the components of the matrix H.
This can be rewritten as follows:
u′ = (h1·u + h2·y + h3) / (h7·u + h8·y + h9)
y′ = (h4·u + h5·y + h6) / (h7·u + h8·y + h9)
From the above, two linear constraints on the homography matrix are obtained:
h1·u + h2·y + h3 - u′·(h7·u + h8·y + h9) = 0
h4·u + h5·y + h6 - y′·(h7·u + h8·y + h9) = 0
Using the computed homography matrix, the historical crack labels are mapped onto the query image through the following relationship:
[u′i y′i 1]^T = H·[ui yi 1]^T  (i = 1, 2, …, n)
where n is the number of historical crack label pixels, [u′i y′i]^T are the coordinates of the mapped crack data on the query image, and [ui yi]^T are the coordinates of the historical crack label pixels.
Therefore, the mapped crack labels on the query image can be represented by a set of two-dimensional image coordinates, as follows:
Q = {[u′i y′i]^T} (i = 1, 2, …, n)
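A sketch of this pixel-level mapping, assuming the homography H between the closest historical image and the query image has already been estimated (for example with cv2.findHomography on the ORB matches from the previous step) and that the historical crack label is given as a list of (u, y) pixel coordinates; both assumptions are illustrative.

```python
import cv2
import numpy as np

def map_crack_labels(historical_label_pixels, H):
    """Map historical crack label pixels [u_i, y_i]^T onto the query image with homography H."""
    pts = np.float32(historical_label_pixels).reshape(-1, 1, 2)
    mapped = cv2.perspectiveTransform(pts, H)        # applies H in homogeneous coordinates
    # Q = {[u'_i, y'_i]^T}: integer pixel coordinates of the mapped labels on the query image
    return np.round(mapped.reshape(-1, 2)).astype(int)
```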
The specific steps of the RGM-based historical crack analysis are:
S1. Analysis of the gray-value distribution of historical cracks: use the query crack labels mapped in the above steps as "ideal" initial seed points. Starting from these seed points, the region is grown by finding neighboring points with attributes similar to those of the seed points (for example, similar color, intensity, etc.). From the mapped labels, the pixel values can be computed from the mapping correspondence on the query image. Using statistical principles, the image intensity distribution histogram of the mapped crack data is plotted.
S2. Represent the intensity distribution of the mapped crack pixels with a Gaussian model:
All of the pixel values I(u′, y′) associated with the mapped crack labels follow a certain pattern, namely a Gaussian distribution, so the Gaussian distribution can be used as the constraint for crack growing. To this end, a Gaussian model is established to represent this distribution, and its distribution characteristics are used as the operator. The corresponding mean and standard deviation can be computed as:
ω = (1/N)·Σ_{i=1..N} I(u′i, y′i)
σ = sqrt( (1/N)·Σ_{i=1..N} (I(u′i, y′i) - ω)² )
where N is the number of mapped crack label pixels.
It should be noted that each query crack image has its own fitted Gaussian model, so the region-growing computation on the image is robust and reliable. With the constructed Gaussian model, it can be quickly determined whether a neighboring pixel has properties similar to the seed points. Meanwhile, since crack regions in an image usually have lower image intensity, the range of image intensities can be set from the Gaussian model parameters, namely the mean and standard deviation.
S3. Growing-crack analysis: a point in the image is classified as a crack when its intensity satisfies the following condition:
I(pu, pv) ∈ [0, ω + λσ]
where λ is a constant determined according to the actual application.
In another specific embodiment of the present invention:
The reference-based crack analysis method of this embodiment consists of three main modules: 1) multi-scale positioning; 2) mapping the historical crack image onto the query crack image; and 3) crack post-processing and analysis. Each query crack data item and each historical crack data item contains GPS information and a crack image. In addition, the historical crack labels in the historical crack data, whether extracted manually or automatically, are represented as the set of pixels belonging to cracks in the pavement image (pixel level). Therefore, each historical crack data item is represented with a point set as follows:
mi = {Gi, Ii, Li}  i ∈ (1, n)
where n is the number of historical crack data items, Gi is the GPS information, Ii is the pavement crack image, and Li is the historical crack label.
The specific process is shown in FIG. 1.
The main purpose of the multi-scale positioning module is to establish the image correspondence between the current crack data and the historical crack data. The method adopts a coarse-to-fine strategy and achieves multi-scale positioning in three steps: GPS initial positioning, image-level positioning, and pixel-level positioning.
GPS initial positioning:
The initial positioning from GPS data is shown in FIG. 3. Let Gj be the GPS coordinate of the j-th query crack data and Gi the GPS coordinate of the i-th historical crack data. The distance between them can be computed as:
dji = dist(Gj, Gi)
Using GPS data matching, a set of candidate historical crack images is obtained whose associated GPS coordinates lie within a threshold distance of the query GPS coordinate. The GPS-based initial positioning allows a limited number of candidate images to be extracted from the large number of collected historical crack images. The mathematical description of this task is as follows:
Pos = {Gi | dji ≤ k}
where k is the threshold distance for selecting candidate historical crack data; it determines whether the i-th historical crack data item is close enough to the j-th query crack data item. In practical applications, the threshold distance is set to 10 meters according to the GPS positioning error.
Starting from the initial positioning result, image matching is applied to achieve image-level positioning. The purpose of image-level positioning is to find the "nearest" historical crack image among the candidate historical crack data, where "nearest" means that the distance between the camera position associated with the query crack image and that of the matched historical crack image is the smallest among all historical crack images. The method uses the matching of local image feature point pairs to achieve image-level positioning.
The matching of local image feature point pairs is implemented with the ORB algorithm. ORB is an image matching algorithm that combines oFAST (FAST with orientation) and rBRIEF (rotated BRIEF). All images are represented by a number of local feature points, and a pair of local feature points from two images represents the same area in both images. Here, oFAST is used for feature point detection and rBRIEF for feature descriptor computation.
oFAST first selects distinctive feature points using the Harris corner measure. Second, orientation information is added so that the extracted feature points are orientation-invariant. The orientation is computed from the moments of the image within a circular window. The image moments are defined as follows:
mpq = Σ_{x,y} x^p · y^q · I(x, y),  p, q ∈ {0, 1}
Therefore, the centroid of the image patch is computed from the moments as follows:
C = (m10/m00, m01/m00)
The orientation of the image patch is defined as follows:
θ = atan2(m01, m10)
where atan2 is the quadrant-aware variant of the arctangent function. The BRIEF descriptor is a bit-string description of an image patch constructed from a set of binary intensity tests. Consider a smoothed image patch P and a binary test on two arbitrary locations x and y that compares their image intensities:
τ(P; x, y) = 1 if P(x) < P(y), and 0 otherwise
where P(x) is the intensity of the image patch at point x. Therefore, the BRIEF descriptor is defined as a vector of n binary tests on locations x and y:
fn(P) = Σ_{1≤i≤n} 2^(i-1) · τ(P; xi, yi)
In the literature there are many solutions for how to choose the n binary tests. In this embodiment, a Gaussian distribution around the center of the image patch is used, and the vector length is chosen as n = 256; the ORB feature descriptor is therefore represented by a 256-bit string. To make the BRIEF descriptor invariant to rotation, the BRIEF description is steered according to the orientation of the keypoint. For any feature set of n binary tests, define the following 2×n matrix:
S = [x1 x2 … xn; y1 y2 … yn]
From the orientation θ of the image patch, the corresponding rotation matrix can be computed as follows:
Rθ = [cosθ -sinθ; sinθ cosθ]
A steered version Sθ can then be constructed with the rotation matrix, as follows:
Sθ = Rθ·S
Therefore the rotation-invariant descriptor, also called the ORB descriptor, can be computed as follows:
gn(P, θ) = fn(P) | (xi, yi) ∈ Sθ
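The steered-BRIEF construction above can be illustrated with the short sketch below, assuming a precomputed sampling pattern of point-pair offsets around the patch center; in practice the rBRIEF pattern and its decorrelation are handled inside OpenCV's ORB, so this only mirrors the math.

```python
import numpy as np

def steered_brief(patch, theta, pattern):
    """Compute the rBRIEF bit string of a smoothed grayscale patch for orientation theta.

    pattern: array of shape (n, 2, 2) holding the n binary-test point pairs (x_i, y_i)
    as offsets from the patch center (an assumed layout).
    """
    n = pattern.shape[0]
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])                 # rotation matrix R_theta
    rotated = (pattern.reshape(-1, 2) @ R.T).reshape(n, 2, 2)   # S_theta = R_theta * S
    cy, cx = (patch.shape[0] - 1) / 2.0, (patch.shape[1] - 1) / 2.0
    bits = np.zeros(n, dtype=np.uint8)
    for i, ((x1, y1), (x2, y2)) in enumerate(rotated):
        p1 = patch[int(round(cy + y1)), int(round(cx + x1))]
        p2 = patch[int(round(cy + y2)), int(round(cx + x2))]
        bits[i] = 1 if p1 < p2 else 0               # binary test tau(P; x, y)
    return bits
```

The sketch assumes the rotated sampling points stay inside the patch, which holds when the patch is larger than the pattern radius.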
Mapping the historical crack labels onto the query image:
Once the "nearest" historical crack image has been obtained, the underlying geometric relationship between the query image and the historical image can be determined. Assuming the pavement is a plane, the underlying geometry can be described by a homography matrix under the pinhole camera model:
[u′ y′ 1]^T = H·[u y 1]^T,  H = [h1 h2 h3; h4 h5 h6; h7 h8 h9]
hi denotes the components of the matrix H.
The above can be rewritten as follows:
u′ = (h1·u + h2·y + h3) / (h7·u + h8·y + h9)
y′ = (h4·u + h5·y + h6) / (h7·u + h8·y + h9)
From these, two linear constraints on the homography matrix are obtained:
h1·u + h2·y + h3 - u′·(h7·u + h8·y + h9) = 0
h4·u + h5·y + h6 - y′·(h7·u + h8·y + h9) = 0
Since the homography matrix is determined only up to scale, it can be computed from at least 4 point pairs. In practical applications, the direct linear transform (DLT) can be applied to compute the homography matrix, and the result can be refined with the Levenberg-Marquardt (LM) method.
Using the computed homography matrix, the historical crack labels can be mapped onto the query image, as follows:
[u′i y′i 1]^T = H·[ui yi 1]^T  (i = 1, 2, …, n)
where n is the number of historical crack label pixels, [u′i y′i]^T are the coordinates of the mapped crack data on the query image, and [ui yi]^T are the coordinates of the historical crack label pixels.
Therefore, the mapped crack labels on the query image can be represented by a set of two-dimensional image coordinates, as shown below:
Q = {[u′i y′i]^T} (i = 1, 2, …, n)
Crack post-processing and region-growing analysis:
In practical applications, the query crack labels can be obtained quickly by mapping the historical crack labels into the query crack image. Since the crack condition may deteriorate during the time interval between the acquisition of the query and the historical crack data, there are still some pixels belonging to newly generated cracks that need to be detected. Because the RGM relies on good crack seed points, the query crack labels mapped in the above steps can be used as "ideal" initial seed points. Starting from these seed points, the region is grown by finding neighboring points with attributes similar to those of the seed points (for example, similar color, intensity, etc.), as shown in FIG. 4, where the red point in the middle is the initial seed point and the blue points form the growing region.
From the mapped labels, the pixel values can be computed from the mapping correspondence on the query image. All of the pixel values associated with the mapped crack labels follow a certain pattern, as shown in FIG. 5, which presents a typical image intensity distribution of the mapped crack pixels resembling a Gaussian model; this can serve as the constraint for crack growing. Therefore, this embodiment develops a Gaussian model for this task, whose mean and standard deviation can be computed as:
ω = (1/N)·Σ_{i=1..N} I(u′i, y′i)
σ = sqrt( (1/N)·Σ_{i=1..N} (I(u′i, y′i) - ω)² )
where N is the number of mapped crack label pixels.
It should be noted that each query crack image has its own fitted Gaussian model, so the region growing is robust and reliable. From the computed Gaussian model, it can be quickly determined whether a neighboring pixel has properties similar to the seed points. Since crack regions in an image usually have lower image intensity, the range of image intensities can be set from the Gaussian model parameters (such as the mean and standard deviation). Therefore, a point is classified as a crack when its intensity satisfies the following condition:
I(pu, pv) ∈ [0, ω + λσ]
where λ is a proportionality constant that is set empirically in practical applications.
Therefore, the RGM can apply the above formula to examine all image points adjacent to the seed points. With the RGM, all pixels belonging to newly grown cracks in the query crack image can be detected.
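A sketch of the RGM step under the Gaussian constraint described above, assuming an 8-connected neighborhood, seed coordinates that lie inside the image, and an illustrative value for λ.

```python
import numpy as np
from collections import deque

def grow_cracks(gray, seed_pixels, lam=1.5):
    """Region growing from mapped crack seeds; a pixel joins the crack region
    when its intensity lies in [0, omega + lam * sigma] (the Gaussian constraint)."""
    seeds = [(int(y), int(u)) for u, y in seed_pixels]          # (row, col) from (u', y')
    vals = np.array([gray[r, c] for r, c in seeds], dtype=float)
    omega, sigma = vals.mean(), vals.std()                      # Gaussian model of mapped labels
    upper = omega + lam * sigma
    h, w = gray.shape
    crack = np.zeros((h, w), dtype=bool)
    queue = deque(seeds)
    for r, c in seeds:
        crack[r, c] = True
    while queue:
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and not crack[rr, cc] and gray[rr, cc] <= upper:
                    crack[rr, cc] = True                        # newly grown crack pixel
                    queue.append((rr, cc))
    return crack
```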
In another specific embodiment of the present invention:
The present invention provides a crack growth detection method based on historical crack data, comprising the following steps:
Historical map acquisition stage:
S1. The image sensing module mounted on the map acquisition vehicle acquires pavement images in real time, while the positioning module and the inertial navigation module synchronously acquire positioning information and inertial navigation information;
S2. The acquired pavement images are pre-processed to obtain the corresponding grayscale images; the positioning information and inertial navigation information are pre-processed to obtain the position information corresponding to each grayscale image;
S3. Each grayscale image is associated one-to-one with its position information and, combining the actual vehicle speed with the acquisition frequency of the image sensing module, the grayscale images are kept at a fixed interval to obtain the atlas;
Positioning stage:
S4. The image sensing module mounted on the vehicle to be positioned acquires the current pavement image in real time, while the positioning module synchronously acquires the current positioning information; by comparing the current positioning information with the position information in the atlas, multiple similar images close to the current pavement image are extracted from the atlas;
S5. The current pavement image is pre-processed to obtain the current grayscale image; image features are extracted from the current grayscale image and the similar images, and feature points that satisfy feature uniqueness, time invariance, and invariance to feature translation and rotation are obtained as pavement fingerprint information; from the pavement fingerprint information, the single closest pavement image is obtained;
S6. From the relative positional relationship between the fingerprint information in the closest pavement image and in the current pavement image, the precise positioning result of the current pavement image is computed.
Furthermore, the image feature extraction method in step S5 of the present invention is:
S51. Detect image feature points: detect corner points with the Harris algorithm, i.e., compare the difference between a pixel P in the image and the pixels on a circle formed by several surrounding points:
N = Σ_{x∈circle(p)} (|I(x) - I(p)| > ε)
where I(x) is the gray value of the current pixel on the circle, I(p) is the gray value of pixel P, and ε is the set threshold; the sum of the response values over these points is N, and when N is greater than the decision criterion, the point is an image feature point;
Determine the main orientation of a feature point from the direction from the feature point to the centroid:
mpq = Σ_{x,y} x^p · y^q · I(x, y),  p, q ∈ {0, 1}
θ = arctan(m10, m01)
where mpq (p, q ∈ {0, 1}) denotes the moments of the image block, I(x, y) is the gray value at coordinate (x, y), and θ is the angle between the centroid and the feature point;
S52. Describe the feature points:
Select n pairs of features at the feature points to form the matrix s:
s = [x1 x2 … x2n; y1 y2 … y2n]
The size of the matrix s is 2×2n, where xi, yi (i ∈ (1, n)) denote the X- and Y-axis coordinates of the i-th feature pair. Then the direction from the feature point to the centroid is used to obtain the affine transformation matrix R, and a new description matrix sθ is computed from R as:
Rθ = [cosθ -sinθ; sinθ cosθ],  sθ = Rθ·s
Combining with the BRIEF descriptor, the ORB feature descriptor is obtained:
gn(P, θ) = fn(P) | (xi, yi) ∈ sθ
where n is set to 256;
S53. Feature point matching: from the resulting 256-dimensional ORB feature descriptors, compute the Hamming distances between all feature descriptors of the current grayscale image to be positioned and of the similar images; compare the Hamming distances to achieve coarse matching of feature points, and use the RANSAC algorithm to remove wrong matches, thereby finding the feature points shared by the current grayscale image to be positioned and each similar image. These matched feature points serve as pavement fingerprint information. By comparing the numbers of shared feature points, the image closest to the pavement is found among the coarsely matched similar images, namely the one with the largest number of feature points matched with the image to be positioned, so that the fingerprint information achieves a precise match for the image to be positioned.
Furthermore, the method for computing the precise positioning result of the current pavement image in step S6 of the present invention is:
Use the fingerprint information of the current pavement image to be positioned and of the similar image to determine the relative positional relationship between the two images, and then use the position of the similar image together with this relative positional relationship to compute the position information of the current pavement image to be positioned, thereby achieving metric-level positioning of the vehicle.
Furthermore, in step S2 of the present invention, the positioning information and the inertial navigation information are pre-processed to obtain the position information corresponding to the grayscale images by converting the GPS data and the inertial navigation data into position information expressed as longitude and latitude.
Furthermore, in step S3 of the present invention, the fixed interval between images in the atlas is 0.5 m.
It should be understood that those skilled in the art can make improvements or modifications based on the above description, and all such improvements and modifications shall fall within the scope of protection of the appended claims of the present invention.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910469277.1A CN110197157B (en) | 2019-05-31 | 2019-05-31 | Pavement crack growth detection method based on historical crack data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910469277.1A CN110197157B (en) | 2019-05-31 | 2019-05-31 | Pavement crack growth detection method based on historical crack data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110197157A CN110197157A (en) | 2019-09-03 |
CN110197157B true CN110197157B (en) | 2023-03-24 |
Family
ID=67753569
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910469277.1A Active CN110197157B (en) | 2019-05-31 | 2019-05-31 | Pavement crack growth detection method based on historical crack data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110197157B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112669453B (en) * | 2020-11-30 | 2023-05-02 | 南方咨询(湖北)有限公司 | Municipal road detecting system |
CN113129282A (en) * | 2021-04-16 | 2021-07-16 | 广东韶钢松山股份有限公司 | Belt abnormality determination method, device, equipment and storage medium |
CN113506292B (en) * | 2021-07-30 | 2022-09-20 | 同济大学 | Structure surface crack detection and extraction method based on displacement field |
CN114359147A (en) * | 2021-12-03 | 2022-04-15 | 深圳大学 | Crack detection method, device, server and storage medium |
CN114972314A (en) * | 2022-06-22 | 2022-08-30 | 广东电网有限责任公司 | Crack detection method for power equipment, computer equipment and storage medium |
CN114898107B (en) * | 2022-07-01 | 2022-12-02 | 深之蓝海洋科技股份有限公司 | Crack re-identification method and device |
CN117037105B (en) * | 2023-09-28 | 2024-01-12 | 四川蜀道新能源科技发展有限公司 | Pavement crack filling detection method, system, terminal and medium based on deep learning |
CN117890900A (en) * | 2024-01-18 | 2024-04-16 | 中建三局信息科技有限公司 | Radar positioning method, radar positioning device, electronic equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016172827A1 (en) * | 2015-04-27 | 2016-11-03 | 武汉武大卓越科技有限责任公司 | Stepwise-refinement pavement crack detection method |
CN106960591A (en) * | 2017-03-31 | 2017-07-18 | 武汉理工大学 | A kind of vehicle high-precision positioner and method based on road surface fingerprint |
CN108229562A (en) * | 2018-01-03 | 2018-06-29 | 重庆亲禾智千科技有限公司 | It is a kind of to obtain the method for the specific failure modes situation in road surface |
-
2019
- 2019-05-31 CN CN201910469277.1A patent/CN110197157B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016172827A1 (en) * | 2015-04-27 | 2016-11-03 | 武汉武大卓越科技有限责任公司 | Stepwise-refinement pavement crack detection method |
CN106960591A (en) * | 2017-03-31 | 2017-07-18 | 武汉理工大学 | A kind of vehicle high-precision positioner and method based on road surface fingerprint |
CN108229562A (en) * | 2018-01-03 | 2018-06-29 | 重庆亲禾智千科技有限公司 | It is a kind of to obtain the method for the specific failure modes situation in road surface |
Also Published As
Publication number | Publication date |
---|---|
CN110197157A (en) | 2019-09-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110197157B (en) | Pavement crack growth detection method based on historical crack data | |
CN110686677B (en) | Global positioning method based on geometric information | |
CN110969088B (en) | A Change Detection Method for Remote Sensing Image Based on Saliency Detection and Deep Siamese Neural Network | |
CN107092877B (en) | Roof contour extraction method of remote sensing image based on building base vector | |
CN110334578B (en) | Weak supervision method for automatically extracting high-resolution remote sensing image buildings through image level annotation | |
CN102768022B (en) | Tunnel surrounding rock deformation detection method adopting digital camera technique | |
CN106960591B (en) | A high-precision vehicle positioning device and method based on road surface fingerprints | |
CN114998852A (en) | Intelligent detection method for road pavement diseases based on deep learning | |
CN109544612A (en) | Point cloud registration method based on the description of characteristic point geometric jacquard patterning unit surface | |
CN107341781B (en) | SAR image correction method based on improved phase consistency feature vector base map matching | |
CN101770581A (en) | Semi-automatic detecting method for road centerline in high-resolution city remote sensing image | |
CN110619258B (en) | Road track checking method based on high-resolution remote sensing image | |
CN112396612B (en) | Vector information assisted remote sensing image road information automatic extraction method | |
CN102903109B (en) | A kind of optical image and SAR image integration segmentation method for registering | |
Shao et al. | Application of a fast linear feature detector to road extraction from remotely sensed imagery | |
CN109461132B (en) | SAR Image Automatic Registration Method Based on Geometric Topological Relationship of Feature Points | |
CN107240130B (en) | Remote sensing image registration method, device and system | |
CN114596500A (en) | Remote sensing image semantic segmentation method based on channel-space attention and DeeplabV3plus | |
CN110379007A (en) | Three-dimensional Highway Curve method for reconstructing based on vehicle-mounted mobile laser scanning point cloud | |
CN112329559A (en) | Method for detecting homestead target based on deep convolutional neural network | |
CN110245566A (en) | A long-distance tracking method for infrared targets based on background features | |
CN108492711A (en) | A kind of drawing electronic map method and device | |
CN109389165A (en) | Oil level gauge for transformer recognition methods based on crusing robot | |
CN118314157B (en) | Remote sensing image intelligent segmentation method based on artificial intelligence | |
CN114627380A (en) | Rice identification method based on fusion of optical image and SAR time sequence data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |