CN117011704A - Feature extraction method based on dotted line feature fusion and self-adaptive threshold - Google Patents
Feature extraction method based on dotted line feature fusion and self-adaptive threshold
- Publication number: CN117011704A
- Application number: CN202310835924.2A
- Authority: CN (China)
- Prior art keywords: image, feature, point, sub, points
- Prior art date: 2023-07-07
- Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Links
- 238000000605 extraction Methods 0.000 title claims abstract description 55
- 230000004927 fusion Effects 0.000 title claims abstract description 18
- 230000003044 adaptive effect Effects 0.000 claims abstract description 24
- 238000001514 detection method Methods 0.000 claims description 12
- 238000004364 calculation method Methods 0.000 claims description 9
- 238000000034 method Methods 0.000 claims description 9
- 230000000694 effects Effects 0.000 claims description 5
- 238000012545 processing Methods 0.000 claims description 4
- 238000012937 correction Methods 0.000 claims description 3
- 239000006185 dispersion Substances 0.000 claims description 3
- 238000001914 filtration Methods 0.000 claims description 3
- 238000003706 image smoothing Methods 0.000 claims description 3
- 230000000717 retained effect Effects 0.000 claims description 3
- 238000000265 homogenisation Methods 0.000 claims description 2
- 230000000007 visual effect Effects 0.000 abstract description 9
- 238000013507 mapping Methods 0.000 abstract description 3
- 238000005516 engineering process Methods 0.000 description 6
- 238000010586 diagram Methods 0.000 description 4
- 238000011160 research Methods 0.000 description 2
- 239000013589 supplement Substances 0.000 description 2
- 230000006978 adaptation Effects 0.000 description 1
- 230000000052 comparative effect Effects 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000008685 targeting Effects 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a feature extraction method based on point-line feature fusion and an adaptive threshold, belonging to the field of autonomous driving. To address the poor positioning accuracy and weak robustness of visual-SLAM-based autonomous navigation of unmanned vehicles in low-texture scenes and scenes with rapid short-term motion, the invention provides a feature point extraction method based on point-line feature fusion and an adaptive threshold. An image is first input and set to a fixed resolution; the image is then processed in parallel by two threads, which respectively perform line feature extraction and adaptive-threshold, uniformly distributed ORB point feature extraction; finally the features are fused, so that feature extraction adapts to low-texture and low-light scenes. The invention is of great significance for feature extraction in complex low-texture scenes and for accurate positioning and mapping in fast-motion scenes.
Description
Technical Field
The invention belongs to the field of autonomous driving, and in particular relates to a feature extraction method based on point-line feature fusion and an adaptive threshold.
Background Art
Visual SLAM enables a mobile device to use image information not only to compute its pose but also to build a map of the environment at the same time, and it is widely applied in many fields such as three-dimensional reconstruction, robotics and autonomous driving of unmanned vehicles. However, the practical application scenarios of unmanned vehicles are changeable and complex. In particular, under rapid short-term motion or in low-texture environments, motion blur occurs and the overlapping region between two frames becomes too small, leading to low positioning accuracy and poor stability. How to solve the poor positioning accuracy and weak robustness of visual-SLAM-based autonomous navigation of unmanned vehicles in low-texture scenes and scenes with rapid short-term motion has therefore become one of the research hotspots of recent years.
In recent years, through the continuous effort, innovation and research of the scientific community, visual SLAM has gradually matured in both theory and technology, but mostly under ideal conditions, for example environments with clear and rich texture. Few real environments reach this ideal state. Many widely used visual SLAM systems are built on point features alone, and their positioning accuracy in low-texture scenes is relatively poor, mainly because not enough point features can be extracted to satisfy the requirements of pose estimation. How to improve the positioning accuracy of visual SLAM in low-texture scenes has therefore become one of the focal points for scholars.
Summary of the Invention
In view of this, the purpose of the present invention is to design a feature extraction method based on point-line feature fusion and an adaptive threshold, so as to solve the problem described in the background art that not enough feature points can be extracted in low-texture scenes to satisfy the requirements of pose estimation.
The present invention is a point-line feature fusion extraction method, which is implemented by the following scheme:
Step 1: image input and preparation for feature extraction. Surrounding images are collected in real time by a binocular (stereo) camera, images are captured at a fixed frame interval, and the image resolution is set to 752×480.
Step 2: ORB feature points are extracted with the feature point extraction algorithm of ORB-SLAM3 according to the adaptive threshold;
Further, in step 2, extracting ORB feature points with the ORB-SLAM3 feature point extraction algorithm according to the adaptive threshold comprises the following steps:
Step a: the input image is divided into four regions, and different FAST corner detection thresholds are defined according to the different degrees of disorder of the grey values of the sub-image corresponding to each region;
Step b: define the image I1 as the image from which feature points are to be extracted, with height hi and width wi. To improve the uniformity of the ORB features, I1 is divided into four sub-images of height hi/2 and width wi/2;
Step c: the coefficient of variation is used to describe the dispersion of the grey values of all pixels in each sub-image. The coefficient of variation s is the ratio of the standard deviation of a set of data to its mean, where gi denotes the grey value of a pixel in the sub-image, ḡ denotes the mean grey value of the sub-image, and t denotes the number of pixels; the larger the coefficient of variation, the more disordered the grey values. Define gs = 30s; the initial corner detection threshold gth is then defined from gs.
Step d: after the adaptive detection threshold has been set according to the coefficient of variation s, an eight-level image pyramid is built for each sub-image to guarantee scale invariance, and feature points are then extracted at each pyramid level. Let the total number of ORB feature points to be extracted from I1 be xt, let γs be the scale factor of the image pyramid, let inv(γs) denote the reciprocal of the scale factor, and let n be the number of pyramid levels; the number of feature points xti to be extracted at level i of each sub-image is determined from these quantities.
Step e: after the number of feature points to be extracted at each pyramid level has been computed, the pyramid image is partitioned with a square grid of side length 30 pixels and FAST corners are extracted within each grid cell. If no corner is extracted in a cell, the detection threshold of that region is lowered to gth/2 and extraction is attempted again; if still no corner can be extracted, the cell is abandoned so that the overall number of feature points is preserved. The above operation is repeated until corner extraction has been completed in all cells;
Step f: after all corners have been extracted, a quadtree is used to manage them. The root node of the quadtree is defined as the whole sub-image, which is divided into four regions serving as the children of the root. If a child node contains two or more corners, the quadtree is further subdivided at that node; if it contains exactly one corner, the node is kept and not subdivided further; if it contains no corner, the node is deleted. Once the number of feature points extracted at each pyramid level reaches the set threshold, only the corner with the highest Harris response in each child node is retained. Finally, the FAST corners extracted from the four sub-images are merged and the corresponding descriptors are computed, completing the adaptive-threshold, uniformly distributed extraction of ORB feature points.
Step 3: line features are extracted based on the EDLines algorithm;
Further, the line feature extraction based on the EDLines algorithm in step 3 comprises the following steps:
Step I: to improve the detection quality, distortion correction is first applied to the input image, using the distortion parameters supplied with the camera from the factory;
Step II: image smoothing; a Gaussian filter with a 5×5 kernel and σ = 1 is used to suppress the noise in the image;
Step III: the Sobel gradient operator is used to compute the gradient magnitude and direction of the image, where I(x,y) is the pixel value of the image at (x,y), g(x,y) is the gradient magnitude, and angle(x,y) is the angle of the horizontal (level) line;
Step IV: every pixel is traversed, and pixels whose gradient value along the gradient direction is greater than or equal to that of their neighbouring pixels are selected and defined as anchor points;
Step V: an anchor point is chosen as the starting point; by comparing the gradient magnitudes of neighbouring pixels, the pixel with the largest gradient value is selected as the next anchor point, and adjacent anchors are finally connected to form edge pixel chains;
Step VI: one or more line segments are separated from each edge pixel chain. The pixels are traversed in order and fitted with a line by the least-squares method, where xi, yi are the pixel coordinates; when the fitting result exceeds the threshold, for example an error of one pixel, the segment is truncated, and this is repeated until all pixels have been processed. The line segment fitting involves the maximum root-mean-square fitting error and the minimum line segment length; the fitting error is computed with the least-squares expression above, and the minimum line segment length is computed from N, the width of the input image.
Step 4: features are extracted by the two threads and point-line feature fusion is performed.
Compared with the prior art, the present invention, by adopting the above technical solution, has the following technical effects:
1) In the feature extraction method based on point-line feature fusion and an adaptive threshold of the present invention, abundant line features exist in low-texture scenes, because compared with point features, line features are hardly affected by factors such as illumination, occlusion and viewpoint changes and are applicable to a wide variety of scenes; introducing rich line features in such scenes and fusing them with point features compensates for insufficient feature matching and thereby improves the positioning accuracy and robustness of the system. 2) To address the reduced accuracy of feature-point-based visual odometry caused by redundant feature points and insensitivity to moving objects, the image is partitioned into regions in the feature extraction stage of the visual odometry, the feature point extraction threshold is set adaptively according to the coefficient of variation of the regional grey values, and the feature points are managed with a quadtree structure, achieving uniformly distributed extraction of ORB (Oriented FAST and Rotated BRIEF) features. 3) Feature extraction is performed through point-line feature fusion: in scenes with normal texture, feature points are extracted uniformly in an adaptive manner; in low-texture scenes, where line features are more advantageous, point features are not only extracted with the adaptive threshold, but line features are also used to supplement the description of the scene, providing better parameters for robot pose estimation and making the subsequent feature matching, localization and mapping more accurate and more robust. Finally, the point and line features are extracted separately and with high concurrency by two threads and are then fused.
Description of the Drawings
Figure 1 is a flow chart of the feature extraction method based on point-line feature fusion and an adaptive threshold;
Figure 2 is a flow chart of the adaptive-threshold ORB feature point extraction method;
Figure 3 is a schematic diagram of the sub-image region division;
Figure 4 is a schematic diagram of the construction of the sub-image pyramids;
Figure 5 is a schematic diagram of quadtree-based feature point extraction;
Figure 6 shows the test images of the feature extraction experiment;
Figure 7 is a flow chart of the line feature extraction method based on the EDLines algorithm;
Figure 8 is a schematic diagram of the distribution of point and line features in a low-texture scene.
Detailed Description of the Embodiments
To make the purpose, technical solution and advantages of the present invention clearer, the invention is described below with reference to specific examples in the accompanying drawings. It should be understood, however, that these descriptions are merely exemplary and are not intended to limit the scope of the invention. The structures, proportions and sizes shown in the drawings of this specification serve only to accompany the content disclosed in the specification for the understanding of those skilled in the art and do not limit the conditions under which the invention can be implemented; any structural modification, change of proportion or adjustment of size that does not affect the effects and purposes achievable by the invention shall still fall within the scope of the technical content disclosed by the invention. In addition, descriptions of well-known structures and techniques are omitted in the following description to avoid unnecessarily obscuring the concepts of the present invention.
It should also be noted here that, to avoid obscuring the invention with unnecessary detail, only the structures and processing steps closely related to the solution of the present invention are shown in the drawings, and other details of little relevance to the invention are omitted.
As shown in Figure 1, a specific embodiment of the present invention comprises the following steps:
Step 1: image input and preparation for feature extraction. Surrounding images are collected in real time by a binocular (stereo) camera, a stereo image pair is captured only every fixed number of frames, and the image resolution is set to 752×480. As can be seen from Figure 1, the whole process runs in parallel and the point and line features are processed separately; this provides better robustness to low-texture images and allows the features to complement each other, making the pose estimation of the unmanned vehicle more accurate.
Step 2: ORB feature points are extracted with the feature point extraction algorithm of ORB-SLAM3 according to the adaptive threshold;
Further, as shown in Figure 2, extracting ORB feature points with the ORB-SLAM3 feature point extraction algorithm according to the adaptive threshold in step 2 comprises the following steps:
Step a: as shown in Figure 3, the input image is divided into four regions (region 1, region 2, region 3 and region 4), and different FAST corner detection thresholds are defined according to the different degrees of disorder of the grey values of the sub-image corresponding to each region;
Step b: define the image I1 as the image from which feature points are to be extracted, with height hi and width wi. To improve the uniformity of the ORB features, I1 is divided into four sub-images of height hi/2 and width wi/2;
Step c: the coefficient of variation is used to describe the dispersion of the grey values of all pixels in each sub-image. The coefficient of variation s is the ratio of the standard deviation of a set of data to its mean, where gi denotes the grey value of a pixel in the sub-image, ḡ denotes the mean grey value of the sub-image, and t denotes the number of pixels; the larger the coefficient of variation, the more disordered the grey values. Define gs = 30s; the initial corner detection threshold gth is then defined from gs.
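For reference, the dispersion measure described in step c can be written out explicitly from the verbal definition given above; only the coefficient of variation and the quantity gs are reconstructed here, since the further formula that maps gs to the initial threshold gth appears only as an image in the original publication:

$$\bar g=\frac{1}{t}\sum_{i=1}^{t}g_i,\qquad s=\frac{1}{\bar g}\sqrt{\frac{1}{t}\sum_{i=1}^{t}\left(g_i-\bar g\right)^{2}},\qquad g_s=30\,s$$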
Step d: as shown in Figure 4, after the adaptive detection threshold has been set according to the coefficient of variation s, an eight-level image pyramid is built for each sub-image to guarantee scale invariance, and feature points are then extracted at each pyramid level. Let the total number of ORB feature points to be extracted from I1 be xt, let γs be the scale factor of the image pyramid, let inv(γs) denote the reciprocal of the scale factor, and let n be the number of pyramid levels; the number of feature points xti to be extracted at level i of each sub-image is determined from these quantities.
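The per-level allocation named in step d is commonly realised as a geometric distribution over the n pyramid levels, as in ORB-SLAM-style extractors; the following closed form is an assumed reconstruction consistent with the quantities xt, γs and inv(γs) defined above, not the original equation itself:

$$x_{ti}=x_t\cdot\frac{1-\operatorname{inv}(\gamma_s)}{1-\operatorname{inv}(\gamma_s)^{\,n}}\cdot\operatorname{inv}(\gamma_s)^{\,i},\qquad i=0,1,\dots,n-1,\qquad\sum_{i=0}^{n-1}x_{ti}=x_t$$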
Step e: after the number of feature points to be extracted at each pyramid level has been computed, the pyramid image is partitioned with a square grid of side length 30 pixels and FAST corners are extracted within each grid cell. If no corner is extracted in a cell, the detection threshold of that region is lowered to gth/2 and extraction is attempted again; if still no corner can be extracted, the cell is abandoned so that the overall number of feature points is preserved. The above operation is repeated until corner extraction has been completed in all cells;
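A minimal Python/OpenCV sketch of steps c and e is given below, with OpenCV's stock FAST detector standing in for the detector used in the original; the 30-pixel cells and the fall-back to half the threshold follow the description above, while the function names and the clamping of the threshold to a usable integer range are illustrative assumptions:

```python
import cv2
import numpy as np

def variation_threshold(sub_img):
    """Coefficient-of-variation based threshold for one grey-scale sub-image (step c)."""
    g = sub_img.astype(np.float64)
    mean = g.mean()
    s = g.std() / mean if mean > 0 else 0.0        # coefficient of variation s
    return int(np.clip(30.0 * s, 5, 60))           # g_s = 30*s; the clamping range is an assumed detail

def grid_fast(sub_img, cell=30):
    """FAST corners per 30x30 cell, retrying once at half the threshold (step e)."""
    g_th = variation_threshold(sub_img)
    keypoints = []
    h, w = sub_img.shape[:2]
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            patch = sub_img[y:y + cell, x:x + cell]
            for th in (g_th, max(g_th // 2, 1)):   # lower to g_th/2 when a cell yields nothing
                det = cv2.FastFeatureDetector_create(threshold=th, nonmaxSuppression=True)
                kps = det.detect(patch, None)
                if kps:
                    # shift the cell-local coordinates back into sub-image coordinates
                    keypoints += [cv2.KeyPoint(k.pt[0] + x, k.pt[1] + y, k.size,
                                               k.angle, k.response, k.octave) for k in kps]
                    break                          # cell satisfied; otherwise the cell is abandoned
    return keypoints
```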
Step f: as shown in Figure 5, after all corners have been extracted, a quadtree is used to manage them. The root node of the quadtree is defined as the whole sub-image, which is divided into four regions serving as the children of the root. If a child node contains two or more corners, the quadtree is further subdivided at that node; if it contains exactly one corner, the node is kept and not subdivided further; if it contains no corner, the node is deleted. Once the number of feature points extracted at each pyramid level reaches the set threshold, only the corner with the highest Harris response in each child node is retained. Finally, the FAST corners extracted from the four sub-images are merged and the corresponding descriptors are computed, completing the adaptive-threshold, uniformly distributed extraction of ORB feature points. A comparative experiment was carried out between the standard ORB feature extraction algorithm provided by the computer vision library (OpenCV) and the improved algorithm, with the number of ORB feature points to be extracted set to 500 and 1000 respectively; the extraction results are shown in Table 1 (a simplified sketch of the quadtree culling is given after the table):
Table 1: Feature extraction experimental results
As can be seen from the table, the improved feature point extraction algorithm achieves uniformly distributed feature extraction and enhances the ability of the feature points to describe the image. Because the final extraction result at each pyramid level may exceed the set number, the number of feature points finally extracted is slightly larger than the set number.
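The quadtree culling of step f can be sketched as follows. This is a simplified, assumed implementation in the spirit of ORB-SLAM's keypoint distribution: the node bookkeeping, the iteration cap and the tie-breaking are illustrative choices rather than details taken from the original.

```python
def quadtree_cull(kps, x0, y0, x1, y1, target, max_iter=200):
    """Split the sub-image into quadtree cells until roughly `target` cells exist,
    then keep only the strongest-response corner in each cell (step f).
    `kps` is a list of cv2.KeyPoint objects."""
    nodes = [(x0, y0, x1, y1, list(kps))]
    for _ in range(max_iter):                              # safety cap, illustrative only
        if len(nodes) >= target:
            break
        nodes.sort(key=lambda n: len(n[4]), reverse=True)  # split the most crowded node first
        if len(nodes[0][4]) <= 1:
            break                                          # every node already holds at most one corner
        bx0, by0, bx1, by1, pts = nodes.pop(0)
        mx, my = (bx0 + bx1) / 2.0, (by0 + by1) / 2.0
        for cx0, cy0, cx1, cy1 in ((bx0, by0, mx, my), (mx, by0, bx1, my),
                                   (bx0, my, mx, by1), (mx, my, bx1, by1)):
            sub = [p for p in pts if cx0 <= p.pt[0] < cx1 and cy0 <= p.pt[1] < cy1]
            if sub:                                        # children containing no corner are deleted
                nodes.append((cx0, cy0, cx1, cy1, sub))
    # retain the corner with the highest (Harris) response in every surviving node
    return [max(pts, key=lambda p: p.response) for *_, pts in nodes]
```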
Step 3: line features are extracted based on the EDLines algorithm;
Further, as shown in Figure 7, the line feature extraction based on the EDLines algorithm in step 3 comprises the following steps:
Step I: to improve the detection quality, distortion correction is first applied to the input image, using the distortion parameters supplied with the camera from the factory;
Step II: image smoothing; a Gaussian filter with a 5×5 kernel and σ = 1 is used to suppress the noise in the image;
Step III: the Sobel gradient operator is used to compute the gradient magnitude and direction of the image, where I(x,y) is the pixel value of the image at (x,y), g(x,y) is the gradient magnitude, and angle(x,y) is the angle of the horizontal (level) line;
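The gradient computation of step III is given only as an image in the original; a standard Sobel formulation consistent with the symbols named here is the following (an assumed reconstruction; EDLines implementations frequently use the cheaper |gx| + |gy| magnitude, and the horizontal or level-line direction is perpendicular to the gradient direction):

$$g_x=S_x*I,\qquad g_y=S_y*I,\qquad g(x,y)=\sqrt{g_x^{2}+g_y^{2}},\qquad \mathrm{angle}(x,y)=\arctan\!\left(\frac{g_y}{g_x}\right)$$

where Sx and Sy are the horizontal and vertical Sobel kernels and * denotes convolution.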
Step IV: every pixel is traversed, and pixels whose gradient value along the gradient direction is greater than or equal to that of their neighbouring pixels are selected and defined as anchor points;
Step V: an anchor point is chosen as the starting point; by comparing the gradient magnitudes of neighbouring pixels, the pixel with the largest gradient value is selected as the next anchor point, and adjacent anchors are finally connected to form edge pixel chains;
Step VI: one or more line segments are separated from each edge pixel chain. The pixels are traversed in order and fitted with a line by the least-squares method, where xi, yi are the pixel coordinates; when the fitting result exceeds the threshold, for example an error of one pixel, the segment is truncated, and this is repeated until all pixels have been processed. The line segment fitting involves the maximum root-mean-square fitting error and the minimum line segment length; the fitting error is computed with the least-squares expression above, and the minimum line segment length is computed from N, the width of the input image.
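The two quantities named in step VI can be written out as follows; the fitting error is the usual root-mean-square distance of the K chain pixels to the fitted line, and the minimum length is the a-contrario bound used in the EDLines paper. Both are reconstructions consistent with the description above rather than the original equations, so they should be read as assumptions:

$$e_{\mathrm{rms}}=\sqrt{\frac{1}{K}\sum_{i=1}^{K}d\left(x_i,y_i;\ \ell\right)^{2}}\le 1\ \text{pixel},\qquad L_{\min}=\left\lceil\frac{-4\log N}{\log 0.125}\right\rceil$$

where d(xi, yi; ℓ) is the perpendicular distance of pixel (xi, yi) to the fitted line ℓ and N is the width of the input image.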
Step 4: features are extracted by the two threads and point-line feature fusion is performed.
As shown in Figure 8, in low-texture or poorly lit scenes, sparse point features cannot satisfy the actual feature requirements, so the performance of the system is hard to guarantee. The point-line feature fusion approach avoids the limitations of relying on point features alone; running the two extractions in parallel not only improves the positioning accuracy and robustness of the algorithm but also improves the mapping quality of the system and makes the map more intuitive.
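A minimal sketch of the two-thread front end is given below, assuming a Python environment with opencv-contrib-python installed; cv2.ORB_create and cv2.ximgproc.createFastLineDetector stand in for the adaptive-threshold ORB extractor and the EDLines detector described above, so the sketch illustrates the parallel structure and the fused output rather than the patented extractors themselves:

```python
import cv2
from concurrent.futures import ThreadPoolExecutor

def extract_points(gray):
    # Stock ORB stands in for the adaptive-threshold, uniformly distributed extractor.
    orb = cv2.ORB_create(nfeatures=1000)
    return orb.detectAndCompute(gray, None)            # (keypoints, descriptors)

def extract_lines(gray):
    # FastLineDetector (opencv-contrib) is used here as an EDLines-style stand-in.
    fld = cv2.ximgproc.createFastLineDetector()
    return fld.detect(gray)                            # N x 1 x 4 array of segments, or None

def extract_point_line_features(frame):
    gray = cv2.cvtColor(cv2.resize(frame, (752, 480)), cv2.COLOR_BGR2GRAY)
    with ThreadPoolExecutor(max_workers=2) as pool:    # point and line threads run concurrently
        pts_future = pool.submit(extract_points, gray)
        lines_future = pool.submit(extract_lines, gray)
        (kps, desc), lines = pts_future.result(), lines_future.result()
    # "Fusion" at this stage simply hands both observation sets to the front end,
    # which then matches points and lines jointly for pose estimation.
    return kps, desc, lines
```

Because OpenCV releases the Python GIL inside its native calls, the two submissions can genuinely overlap, which mirrors the high-concurrency, dual-thread design described above.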
It is obvious to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments and can be implemented in other specific forms without departing from the spirit or essential characteristics of the invention. The embodiments should therefore be regarded in all respects as illustrative and not restrictive.
In addition, it should be understood that the above embodiments are only intended to illustrate the technical solution of the present invention and not to limit it. Although the invention has been described in detail with reference to the embodiments, those of ordinary skill in the art should understand that the technical solutions of the embodiments may also be appropriately combined to form other embodiments understandable to those skilled in the art; provided they do not depart from the spirit and scope of the technical solution of the invention, they shall all be covered by the scope of the claims of the present invention.
Claims (3)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310835924.2A CN117011704A (en) | 2023-07-07 | 2023-07-07 | Feature extraction method based on dotted line feature fusion and self-adaptive threshold |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310835924.2A CN117011704A (en) | 2023-07-07 | 2023-07-07 | Feature extraction method based on dotted line feature fusion and self-adaptive threshold |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117011704A true CN117011704A (en) | 2023-11-07 |
Family
ID=88564700
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310835924.2A Pending CN117011704A (en) | 2023-07-07 | 2023-07-07 | Feature extraction method based on dotted line feature fusion and self-adaptive threshold |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117011704A (en) |
- 2023: 2023-07-07, CN application CN202310835924.2A published as CN117011704A (en), status Pending
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117315274A (en) * | 2023-11-28 | 2023-12-29 | 淄博纽氏达特机器人系统技术有限公司 | Visual SLAM method based on self-adaptive feature extraction |
CN117315274B (en) * | 2023-11-28 | 2024-03-19 | 淄博纽氏达特机器人系统技术有限公司 | Visual SLAM method based on self-adaptive feature extraction |
CN117710467A (en) * | 2024-02-06 | 2024-03-15 | 天津云圣智能科技有限责任公司 | Unmanned plane positioning method, unmanned plane positioning equipment and aircraft |
CN117710467B (en) * | 2024-02-06 | 2024-05-28 | 天津云圣智能科技有限责任公司 | Unmanned plane positioning method, unmanned plane positioning equipment and aircraft |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN117011704A (en) | | Feature extraction method based on dotted line feature fusion and self-adaptive threshold |
CN113362247B (en) | Semantic real scene three-dimensional reconstruction method and system for laser fusion multi-view camera | |
CN107045634B (en) | Text positioning method based on maximum stable extremum region and stroke width | |
CN102750708B (en) | Affine motion target tracing algorithm based on fast robust feature matching | |
CN107895375B (en) | Complex road route extraction method based on visual multi-features | |
CN110348496B (en) | Face image fusion method and system | |
CN104063711B (en) | A kind of corridor end point fast algorithm of detecting based on K means methods | |
CN111462210A (en) | Monocular line feature map construction method based on epipolar constraint | |
CN104751421B (en) | An image defogging method realized on FPGA | |
CN111553862B (en) | A method for defogging and binocular stereo vision positioning of sea and sky background images | |
CN109544635B (en) | An automatic camera calibration method based on enumeration and heuristic | |
CN111402292B (en) | Image sequence optical flow calculation method based on characteristic deformation error occlusion detection | |
CN111161219B (en) | Robust monocular vision SLAM method suitable for shadow environment | |
CN116524026B (en) | A dynamic visual SLAM method based on frequency domain and semantics | |
CN104537342A (en) | Quick lane line detection method combined with ridge boundary detection and Hough transformation | |
CN106408596A (en) | Edge-based local stereo matching method | |
CN112801021B (en) | Method and system for lane line detection based on multi-level semantic information | |
CN114332394A (en) | Semantic information assistance-based dynamic scene three-dimensional reconstruction method | |
CN116879870A (en) | A dynamic obstacle removal method suitable for low-beam 3D lidar | |
CN111709893B (en) | ORB-SLAM2 improved algorithm based on information entropy and sharpening adjustment | |
CN118587281A (en) | A fast visual simultaneous localization and mapping method based on bidirectional optical flow | |
CN108550119B (en) | An Image Denoising Method Combined with Edge Information | |
CN117218195A (en) | Simultaneous positioning and mapping method based on point and line feature constraint | |
CN115035492B (en) | Vehicle identification method, device, equipment and storage medium | |
CN114842359B (en) | Method for detecting autonomous landing runway of fixed-wing unmanned aerial vehicle based on vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||