CN104331699B - Method for fast search and comparison of planarized three-dimensional point clouds - Google Patents
Method for fast search and comparison of planarized three-dimensional point clouds
- Publication number: CN104331699B (application CN201410671969.1A)
- Authority
- CN
- China
- Prior art date: 2014-11-19
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
Abstract
The invention discloses a method for fast search and comparison of planarized three-dimensional point clouds. The method first acquires the point cloud data of an object and applies smoothing and decimation to it. A two-dimensional view is then selected as required and its boundary is located. Using this boundary as a reference, the image is partitioned into a grid at the required resolution; every cell of the partitioned image is traversed and marked according to the point cloud density inside it. The marking results are then used to render an approximate binary image whose feature points reflect the distribution of the point cloud data. Finally, the scale-invariant feature transform matching algorithm compares the feature points of this approximate binary image against images in a standard library that were processed by the same method, and a traversal finds the library entry with the most matched feature points. The method is accurate, fast, and highly flexible, and it suits any application that builds a standard library and needs to register point cloud data quickly against the images in that library.
Description
Technical Field
The invention relates to three-dimensional point cloud processing and three-dimensional point cloud matching, belongs to the fields of computer vision and pattern recognition, and specifically relates to a method for fast search and comparison of planarized three-dimensional point clouds.
Background Art
Three-dimensional point cloud data is the massive set of spatially distributed points describing the morphological structure of an object, obtained by three-dimensional digitization. High-precision point cloud data represents the three-dimensional shape of a measured object well and has important applications in mold and product development for automobiles, hardware and home appliances, aviation, and ceramics; in rapid prototyping of antiques, handicrafts, sculptures, and portrait products; and in mechanical shape design, cosmetic surgery, human body modeling, body shape measurement, and plant morphology acquisition.
Traditional three-dimensional shape matching mostly compares the point cloud to be matched directly with standard point cloud data, applying geometric transformations such as translation, rotation, and scaling before analyzing similarity or identity. The technique is widely used in medicine, civil engineering, and industrial reverse engineering. It is accurate and matches well, but its time and space complexity is high, and when a model contains too many points the computation time grows sharply. Although suited to precise matching, it only fits one-to-one comparison; it is unsuitable when a query must be matched against many point clouds in a standard library, or when fast recognition is required.
Summary of the Invention
In view of this, the present invention addresses the insufficient speed of existing methods by proposing a method for fast search and comparison of planarized three-dimensional point clouds. The method is accurate, fast, and highly flexible, and suits any application that requires fast registration against the point cloud data in a standard library.
The technical solution of the invention is: preprocess the three-dimensional point cloud; convert the three-dimensional point cloud image into a two-dimensional figure; partition the two-dimensional figure into a grid; generate an approximate binary image for matching from the partitioned figure; and match this approximate binary image against the images in the standard library, thereby finding the set of point cloud data in the library closest to the three-dimensional point cloud of the object under test. The method specifically comprises the following steps:
Step 1: Acquire the three-dimensional point cloud of the object under test, smooth the point cloud data with a bilateral filtering denoising algorithm, decimate the smoothed point cloud by random sampling, and finally apply a two-dimensional transformation to the decimated point cloud. The selected view (front view or side view) must be the same view that was used when the standard library was built; the two-dimensional point cloud image is produced by this dimensionality reduction.
Step 2: Find the four boundary points of the two-dimensional point cloud image by quicksort, generate the image boundary from these four points, and partition the image into a grid. Then scan every cell of the partitioned image, mark it according to the point cloud density it contains, fill each cell with the color corresponding to its mark, and generate an approximate binary image of the resulting density gradient.
Step 3: Compare the resulting image with the approximate binary images in the standard library using the scale-invariant feature transform (SIFT) matching algorithm. After all comparisons are finished, use a sequential search that traverses every image group in the library and compares the feature point matches of each group, selecting the group with the most corresponding feature points to complete the comparison.
Further, the two-dimensional transformation in Step 1 means selecting one of the three orthographic views of the three-dimensional image as the comparison requires; the selected view must be the same view as that of the images in the standard library, which guarantees accuracy.
Further, the bilateral filtering denoising algorithm in Step 1 comprises the following steps: 3.1 build the K-neighborhood; 3.2 estimate normal vectors; 3.3 define the view plane; 3.4 apply the bilateral filter operator to obtain the smoothed coordinates.
Further, the random sampling method in Step 1 means: build a generator whose random numbers exactly cover all points of the cloud, draw a continuous stream of random numbers, and each time find and remove the corresponding point from the original cloud, until the total number of points meets the given target.
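As a minimal sketch of this random decimation (assuming numpy and an (N, 3) array; the function name and the retention ratio are illustrative, and the draw-and-remove loop is collapsed into a single random index selection):

```python
import numpy as np

def decimate_random(points: np.ndarray, keep_ratio: float = 0.4) -> np.ndarray:
    """Randomly decimate an (N, 3) point cloud, keeping keep_ratio of the points."""
    n_keep = int(len(points) * keep_ratio)
    keep_idx = np.random.choice(len(points), size=n_keep, replace=False)
    return points[keep_idx]
```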
Further, the boundary points in Step 2 are specifically the extreme points of the image in the four directions (top, bottom, left, right). In the front view, for example, quicksort is used to find the two points with the minimum and maximum X values and the two points with the minimum and maximum Y values; these four coordinate points are the boundary points.
Further, the grid partitioning in Step 2 means: using the boundary determined in this step as the reference, the image is divided into n × n grid cells, where n is a positive integer.
Further, the approximate binary image in Step 2 is so called because the colors of the generated image depend on the density marks assigned in Step 2, so the image lies between a grayscale image and a binary image; for convenience of expression, the two-dimensional images used for comparison are uniformly called approximate binary images. By encoding the marked densities, such an image produces more feature points and improves the accuracy of the method.
Further, generating the approximate binary image of the density gradient in Step 2 specifically comprises the following steps: traverse the points in every grid cell; mark each cell according to the number of points it contains; then, following the marks, fill the corresponding cells with the corresponding colors, generating a pixel map of size n × n. For computational convenience, and provided enough feature points can still be produced, two colors, black and gray, represent the different densities, and cells containing no points are shown in white.
Further, the scale-invariant feature transform matching algorithm in Step 3 specifically comprises the following steps: 9.1 build the image scale space; 9.2 detect keypoints; 9.3 assign keypoint orientations; 9.4 build feature point descriptors; 9.5 exhaustively compare the feature points of the two images and count the number of matching feature points for search and comparison.
Compared with the prior art, the invention has the following advantages: (1) Low time and space complexity: search and comparison are faster while high accuracy is maintained. (2) It suits any application requiring fast comparison and recognition: it supports not only one-to-one matching but also building a standard library and comparing against its data. (3) Strong robustness to interference: by adjusting the key drawing parameter (the density marks of the grid cells), comparison accuracy is preserved even when smoothing works poorly. (4) The algorithm focuses on recognition, so when building the standard library one may scan only the most distinctive face of an object or scan it entirely; different strategies suit different needs, which gives great flexibility, saves a large amount of storage, and increases comparison speed.
Brief Description of the Drawings
To make the purpose, technical solution, and benefits of the invention clearer, the following drawings are provided:
Figure 1 is the flow chart of the method for fast search and comparison of planarized three-dimensional point clouds according to the invention.
Figure 2 is a schematic diagram of the grid partitioning according to the invention.
Figure 3 is a schematic diagram of the approximate binary image according to the invention.
Figure 4 shows the result on the Stanford Bunny point cloud.
Detailed Description
Preferred embodiments of the invention are described in detail below with reference to the drawings.
Figure 1 is the flow chart of the method, which comprises the following steps:
Step 1: Decide, as needed, whether to scan a single field of view or multiple fields of view.
Step 2: Scan the object to be compared with a three-dimensional measurement system to obtain its three-dimensional point cloud.
Step 3: If multiple fields of view are scanned, register all single-field point clouds into one measurement coordinate system. Applicable registration methods, such as robot-arm-assisted registration and pasted marker points, are the standard methods of three-dimensional point cloud registration.
Step 4: Smooth the acquired point cloud data c = {p₁, p₂, …, pₙ}, pᵢ ∈ ℝ³, with the bilateral filtering denoising algorithm.
Step 5: Decimate the smoothed three-dimensional point cloud by random sampling, keeping 40% of the points.
Step 6: Transform the preprocessed point cloud c″ into a front view or top view as required, in this example a front view: remove the Z component of every point of c″, keeping (X, Y). The two-dimensional point cloud data is denoted c*.
Step 7: Find the four boundary points by traversing c*: the rightmost point of the image (the point of maximum X), the leftmost point (minimum X), the topmost point (maximum Y), and the bottommost point (minimum Y).
Step 8: Using the four boundary points as the border and as shown in Figure 2, partition the image into n × n grid cells (n a positive integer), each of width (Xmax − Xmin)/n and height (Ymax − Ymin)/n, where Xmax denotes the X value of the rightmost boundary point, and so on.
Step 9: Traverse the points in every grid cell and mark each cell according to its point density, as shown in Figure 2: when the number of points is below X₁ the mark is 0; when it is greater than X₁ but less than X₂ the mark is 1; when it is greater than X₂ the mark is 2; and so on up to Xᵢ. The values and the number of the thresholds X₁, X₂, …, Xᵢ are set to suit the actual situation; in this example only X₁ and X₂ are used.
Step 10: From the resulting grid and its marks, render the approximate binary image: in this example, cells marked 0 are painted white, cells marked 1 black, and cells marked 2 gray. The colors may be adjusted freely; the resulting drawings are shown in Figures 3 and 4.
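Steps 7 through 10 condense into a short rasterization routine; the following is a minimal sketch assuming numpy, with the thresholds x1 and x2 and the gray values as illustrative placeholders:

```python
import numpy as np

def approximate_binary_image(pts2d: np.ndarray, n: int = 64,
                             x1: int = 2, x2: int = 10) -> np.ndarray:
    """Rasterize a 2D point cloud of shape (M, 2) into an n x n image.

    Cells with fewer than x1 points are white (mark 0), cells with at least
    x1 points are black (mark 1), and cells with at least x2 points are
    gray (mark 2), mirroring Steps 9 and 10.
    """
    # Step 7: boundary points via the per-coordinate minima and maxima.
    (xmin, ymin), (xmax, ymax) = pts2d.min(axis=0), pts2d.max(axis=0)
    # Step 8: bin every point into one of the n x n grid cells.
    counts, _, _ = np.histogram2d(pts2d[:, 0], pts2d[:, 1], bins=n,
                                  range=[[xmin, xmax], [ymin, ymax]])
    # Steps 9-10: mark by density and fill the corresponding color.
    img = np.full((n, n), 255, dtype=np.uint8)  # mark 0: white
    img[counts.T >= x1] = 0                     # mark 1: black
    img[counts.T >= x2] = 128                   # mark 2: gray
    return img
```

Binning all points at once this way keeps the rasterization linear in the number of points, rather than scanning every cell against every point.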
Step 11: Compare the resulting image with the approximate binary images in the standard library using the scale-invariant feature transform matching algorithm. After all comparisons are finished, use a sequential search that traverses every image group in the library and compares the feature point matches of each group, selecting the group with the most corresponding feature points to complete the comparison.
Further, the bilateral filtering denoising algorithm used in Step 4 specifically comprises the following steps:
4.1: Build the K-neighborhood
For any point p in the point cloud c, p ∈ c, the k points closest to p are called the K-neighborhood of p, denoted N(p). Here k = 25.
4.2: Estimate normal vectors
Fit a plane within N(p) by least squares; this plane is called the tangent plane of p over the neighborhood N(p), denoted T(p). The unit normal vector of T(p) is taken as the unit normal vector n of point p.
4.3: Define the view plane
Decompose the space ℝ³ into the direct sum of two subspaces, ℝ³ = N ⊕ S2, where N is the one-dimensional space along the normal direction at p and S2 is the two-dimensional tangent plane space through p. Locally, S2 is defined as the view plane: the projection of a neighborhood point onto S2 gives the pixel position, and the distance from the neighborhood point to its projection gives the pixel value, by analogy with image processing.
4.4: Bilateral filter operator
Apply the bilateral filter operator
d = Σ_{pᵢ∈N(p)} Wc(‖pᵢ − p′‖) Ws(⟨n, pᵢ − p⟩) ⟨n, pᵢ − p⟩ / Σ_{pᵢ∈N(p)} Wc(‖pᵢ − p′‖) Ws(⟨n, pᵢ − p⟩)
where N(p) is the neighborhood of p and pᵢ ∈ N(p); p′ is the projection of p onto S2 (the distance used is not the direct three-dimensional distance but the distance on the projection plane); n is the normal vector at p and nᵢ the normal vector at pᵢ. Wc and Ws are Gaussian kernel functions with standard deviations σc and σs respectively: σc controls the degree of smoothing and σs controls the degree of feature preservation; Wc is the spatial-domain weight and Ws the feature-domain weight. d is the adjustment distance along the normal direction, and the smoothed coordinates ĉ are obtained from p̂ = p + d·n.
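A compact sketch of this smoothing pass (assuming numpy and scipy; the normal is estimated by PCA of each K-neighborhood, and σc, σs are free parameters to be tuned):

```python
import numpy as np
from scipy.spatial import cKDTree

def bilateral_smooth(pts: np.ndarray, k: int = 25,
                     sigma_c: float = 1.0, sigma_s: float = 0.1) -> np.ndarray:
    """One bilateral smoothing pass over an (N, 3) cloud: p_hat = p + d * n."""
    tree = cKDTree(pts)
    _, idx = tree.query(pts, k=k + 1)           # 4.1: K-neighborhood (self included)
    out = pts.copy()
    for i, nbr in enumerate(idx):
        q = pts[nbr[1:]]                        # the k nearest neighbors of p
        # 4.2: normal = eigenvector of the smallest covariance eigenvalue.
        n = np.linalg.eigh(np.cov(q.T))[1][:, 0]
        diff = q - pts[i]
        h = diff @ n                            # offsets along the normal (feature term)
        proj = np.linalg.norm(diff - np.outer(h, n), axis=1)  # 4.3: view-plane distance
        wc = np.exp(-proj ** 2 / (2 * sigma_c ** 2))          # spatial-domain weight
        ws = np.exp(-h ** 2 / (2 * sigma_s ** 2))             # feature-domain weight
        d = np.sum(wc * ws * h) / (np.sum(wc * ws) + 1e-12)   # 4.4: bilateral operator
        out[i] = pts[i] + d * n                 # move p along its normal
    return out
```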
Further, the scale-invariant feature transform matching algorithm used in Step 11 specifically comprises the following steps:
11.1: Build the image scale space (the Gaussian pyramid) and detect extreme points. Here and below, "points" means pixels of the image.
The algorithm builds the scale space with the Gaussian function
G(x, y, σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))
where G(x, y, σ) is the variable-scale Gaussian function.
The scale space of an image, L(x, y, σ), is defined as the convolution of the variable-scale Gaussian G(x, y, σ) with the original image I(x, y):
L(x, y, σ) = G(x, y, σ) * I(x, y)    (3)
The scale space is implemented as a Gaussian pyramid. The pyramid model of an image repeatedly downsamples the original image, producing a series of images of decreasing size that form a tower from large at the bottom to small at the top. The original image is the first level of the pyramid, each downsampled image forms the next level (one image per level), and the pyramid has n levels in total. The number of levels is determined jointly by the original image size and the size of the top image:
n = log₂{min(M, N)} − t,  t ∈ [0, log₂{min(M, N)}]    (4)
where M and N are the dimensions of the original image and t is the logarithm of the smallest dimension of the top image. For example, a 512 × 512 image whose top image has smallest dimension 2³ = 8 (t = 3) gives n = 9 − 3 = 6 levels.
After the scale space is built, stable feature points are found by the difference-of-Gaussians method, which detects local extreme points by subtracting the images at two adjacent scales:
D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)    (5)
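A small sketch of this difference-of-Gaussians construction (assuming OpenCV; the values of σ and k are illustrative):

```python
import cv2
import numpy as np

def difference_of_gaussians(img: np.ndarray, sigma: float = 1.6,
                            k: float = 2 ** 0.5) -> np.ndarray:
    """D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma), per equation (5)."""
    gray = img.astype(np.float32)
    l_lo = cv2.GaussianBlur(gray, (0, 0), sigmaX=sigma)      # L(x, y, sigma)
    l_hi = cv2.GaussianBlur(gray, (0, 0), sigmaX=k * sigma)  # L(x, y, k*sigma)
    return l_hi - l_lo
```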
11.2: Detect keypoints
To find the extreme points of the scale space, every sample point is compared with all of its neighbors to check whether it is larger or smaller than its neighbors in both the image domain and the scale domain. Because adjacent scales must be compared, only the extreme points of two scales can be detected within one set of difference-of-Gaussians images; extreme points at other scales must be detected in the difference-of-Gaussians images of the next pyramid level, so the extrema at all scales are detected level by level across the image pyramid.
11.3: Assign keypoint orientations
To make the descriptor rotation invariant, the local features of the image are used to assign an orientation to each feature point. From the gradient and orientation distribution of the pixels in the neighborhood of a keypoint, the gradient magnitude and orientation are
m(x, y) = √[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²],  θ(x, y) = arctan[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))]
where the scale of L is the scale at which each feature point was detected.
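These central differences are straightforward to compute over a whole image; a minimal sketch assuming numpy:

```python
import numpy as np

def gradient_mag_ori(L: np.ndarray):
    """Per-pixel gradient magnitude m and orientation theta (radians) of a
    smoothed image L, using the central differences of section 11.3."""
    dx = np.roll(L, -1, axis=1) - np.roll(L, 1, axis=1)  # L(x+1, y) - L(x-1, y)
    dy = np.roll(L, -1, axis=0) - np.roll(L, 1, axis=0)  # L(x, y+1) - L(x, y-1)
    return np.hypot(dx, dy), np.arctan2(dy, dx)
```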
Samples are taken in a neighborhood window centered on the keypoint, and a histogram accumulates the gradient orientations of the neighborhood pixels. The gradient histogram covers 0 to 360 degrees, with one bin every 10 degrees, 36 bins in total.
The peak of the histogram represents the dominant direction of the neighborhood gradients at that keypoint and is taken as the orientation of the keypoint.
11.4: Feature point descriptors
After the computations above, each feature point has been assigned three pieces of information: position, scale, and orientation. A descriptor can then be built for each feature point, finally forming a feature vector that is invariant to both scale and rotation.
11.5: Exhaustively compare the feature points of the two images: take a feature point of the image under test and find the two feature points of the standard image nearest to it in Euclidean distance. If the nearest distance divided by the second-nearest distance is below a ratio threshold, accept the pair as a match; this threshold generally lies between 0.4 and 0.6.
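Steps 11.1 through 11.4 are available off the shelf in OpenCV, so the matching and the library traversal of Step 11 can be sketched as follows (the ratio of 0.5 and the dictionary-shaped library are illustrative assumptions):

```python
import cv2

sift = cv2.SIFT_create()

def count_matches(img_a, img_b, ratio: float = 0.5) -> int:
    """Count feature pairs that pass the nearest/second-nearest ratio test (11.5)."""
    _, des_a = sift.detectAndCompute(img_a, None)
    _, des_b = sift.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)         # exhaustive (brute-force) comparison
    pairs = matcher.knnMatch(des_a, des_b, k=2)  # two nearest neighbors per point
    return sum(1 for p in pairs
               if len(p) == 2 and p[0].distance < ratio * p[1].distance)

def best_library_entry(query_img, library: dict):
    """Sequential search of Step 11: return the library key with the most matches."""
    return max(library, key=lambda name: count_matches(query_img, library[name]))
```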
As described above, the invention converts three-dimensional point cloud data into two-dimensional planar data and reduces the comparison of complex three-dimensional point clouds to image matching, greatly shortening comparison time. The gridded image processing greatly relaxes the smoothing requirements on the three-dimensional point cloud, so the method suits any fast-scan, fast-compare application: the standard library only needs to be built by the same method before comparison, the number of grid cells and the display colors can be changed at any time as needed, and data can be added to or removed from the library at will, giving high flexibility and strong extensibility. The scale-invariant feature transform matching algorithm guarantees the rotation invariance and scale invariance of the matching and thereby the accuracy of the comparison.
Finally, it is noted that the above embodiments only illustrate the technical solution of the invention and do not limit it. Although the invention has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that the technical solution may be modified or equivalently replaced without departing from the spirit and scope of the technical solution of the invention, and all such changes shall be covered by the claims of the invention.
Claims (8)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201410671969.1A (CN104331699B) | 2014-11-19 | 2014-11-19 | Method for fast search and comparison of planarized three-dimensional point clouds |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN104331699A | 2015-02-04 |
| CN104331699B | 2017-11-14 |
Family
ID=52406421
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201410671969.1A (CN104331699B, Active) | Method for fast search and comparison of planarized three-dimensional point clouds | 2014-11-19 | 2014-11-19 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN104331699B (en) |
Patent Citations (3)

| Publication number | Priority date | Publication date | Title |
|---|---|---|---|
| CN103268363A * | 2013-06-06 | 2013-08-28 | A Chinese Calligraphy Image Retrieval Method Based on Elastic HOG Features and DDTW Matching |
| CN104112115A * | 2014-05-14 | 2014-10-22 | Three-dimensional face detection and identification technology |
| CN104007444A * | 2014-06-09 | 2014-08-27 | Ground laser radar reflection intensity image generation method based on central projection |
Non-Patent Citations (2)

| Title |
|---|
| Wang Lihui, "Research on Technologies of Three-Dimensional Point Cloud Data Processing," China Doctoral Dissertations Full-text Database, Information Science and Technology, 2012-01-15, I138-46 * |
| Yang Ronghao et al., "Nearest neighbor point search method based on grid partitioning," Science of Surveying and Mapping, vol. 37, no. 5, 2012-09-30, pp. 90-93 * |
Legal Events

| Date | Code | Title |
|---|---|---|
| | C06 | Publication |
| | PB01 | Publication |
| | C10 | Entry into substantive examination |
| | SE01 | Entry into force of request for substantive examination |
| | GR01 | Patent grant |