CN104036492B - A fruit image matching method based on spot extraction and a neighbor point vector method - Google Patents
A fruit image matching method based on spot extraction and a neighbor point vector method
- Publication number: CN104036492B (application CN201410215892.7A)
- Authority: CN (China)
- Prior art keywords: point, vector, matching, image, candidate
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses a fruit image matching method based on spot extraction and a neighbor point vector method. Left and right side images of the fruit are acquired; spot extraction is performed using extreme point detection, Harris corner detection and Canny edge detection to obtain a left-image spot set and a right-image spot set; each point is subjected to a matching judgment to obtain a set of candidate matching points; mismatched points are then eliminated from the candidate set and the correct matching point set is obtained by screening, completing the matching of the fruit images. Through spot extraction and vector-based judgment of the spots, the invention achieves excellent stability, accuracy and real-time performance in fruit image matching.
Description
Technical Field
The invention relates to a fruit image matching method, and in particular to a fruit image matching method based on spot extraction and a neighbor point vector method in the technical field of image processing.
Background Art
Image matching refers to identifying the same point in two or more images by means of a matching method. In recent years, image matching has become a key technology and research hotspot in image analysis and processing fields such as object recognition, robot map perception and navigation, image stitching, 3D model building, gesture recognition, image tracking and motion comparison.
Acquiring fruit surface information is the basis for detecting quality indicators such as size, shape, surface color and surface defects. The accuracy of surface color and surface defect detection depends on obtaining an image of the complete fruit surface; image mosaicking is the key to obtaining such an image, and image matching is in turn the foundation of image mosaicking.
The SIFT (Scale Invariant Feature Transform) method is a local feature descriptor proposed by David Lowe in 1999 (David G. Lowe. Object recognition from local scale-invariant features. International Conference on Computer Vision, Corfu, Greece, 1999: 1150-1157) and further developed and refined in 2004 (David G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 2004, 60(2): 91-110). The extracted SIFT feature vectors are invariant to rotation, scaling and brightness changes, and retain a degree of stability under viewpoint changes, affine transformations and noise.
However, online detection and grading of fruit places high demands on the method. On the one hand, the measures SIFT takes to improve matching adaptability make it complex, computationally expensive and time-consuming, so the original SIFT method cannot meet online requirements. On the other hand, the commonly used mismatch elimination method RANSAC (Chum O, Matas J. Optimal Randomized RANSAC. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2008, 30(8): 1472-1482) rests on the assumption that the sample contains both correct data (inliers, data that the model can describe) and abnormal data (outliers, data that deviate far from the normal range and do not fit the mathematical model), i.e. that the data set contains noise. Such outliers may arise from erroneous measurements, assumptions or calculations. Because the color features of fruit surfaces differ only slightly, SIFT-based fruit image matching cannot obtain matching points stably: a single matching run may yield anywhere from zero to many points, so this class of method cannot be used to eliminate mismatched points in fruit images.
Summary of the Invention
To solve the problems in the background art, the invention proposes a fruit image matching method based on spot extraction and a neighbor point vector method.
The technical solution adopted by the invention to solve its technical problem comprises the following steps:
1) Acquire side images of the fruit:
Place the fruit on a fruit tray so that the line connecting the calyx and the stem is roughly perpendicular to the horizontal plane; acquire a left side image from the side of the fruit, then rotate the fruit 60° about the calyx-stem axis and acquire a right side image;
2) Perform spot extraction to obtain a left-image spot set and a right-image spot set;
3) Perform a matching judgment on every point of the left- and right-image spot sets obtained in step 2) to obtain a set of candidate matching points;
4) For each candidate pair in the set obtained in step 3), eliminate mismatched points and screen out the correct matching points;
5) Complete the matching of the fruit images.
Spot extraction in step 2) comprises the following specific steps:
2.1) Extreme point detection:
First compute the left and right side images according to formula (1) below to obtain an initial Gaussian left image and an initial Gaussian right image:
L(x,y,σ) = G(x,y,σ) * I(x,y)    (1)
where L(x,y,σ) is the computed initial Gaussian image, I(x,y) is the side image to be computed, and G(x,y,σ) is given by formula (2):

G(x,y,σ) = (1 / (2πσ²)) · exp( −(x²+y²) / (2σ²) )    (2)

where σ is the scale coordinate, and x and y are the horizontal and vertical coordinates of the left or right side image;
Then double the size of the initial Gaussian left and right images to obtain the first-layer Gaussian left and right images; compute the first-layer Gaussian left and right images again by formula (1) to obtain the second-layer Gaussian left and right images;
Finally, subtract the first-layer Gaussian left image from the second-layer Gaussian left image to obtain the difference-of-Gaussian (DoG) left image, and subtract the first-layer Gaussian right image from the second-layer Gaussian right image to obtain the DoG right image;
For each pixel p of the DoG left and right images, if the gray value of p is smaller than, or larger than, the gray values of all the other pixels in the 3×3 neighborhood centered on p, mark p as an extreme point;
Traverse every pixel of the DoG left and right images to obtain the left-image DoG extreme point set and the right-image DoG extreme point set;
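The 3×3 extremum test described above can be sketched in Python with NumPy (an illustrative sketch only, not part of the patent text; the DoG image is assumed to have been computed already via formula (1) and the layer subtraction, and the function name `dog_extrema` is ours):

```python
import numpy as np

def dog_extrema(dog):
    """Return (x, y) coordinates of pixels whose gray value is strictly
    smaller or strictly larger than all 8 neighbours in the 3x3
    neighbourhood of the difference-of-Gaussian image `dog`."""
    h, w = dog.shape
    points = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = dog[y - 1:y + 2, x - 1:x + 2].ravel()
            centre = dog[y, x]
            neighbours = np.delete(patch, 4)  # drop the centre pixel
            if centre > neighbours.max() or centre < neighbours.min():
                points.append((x, y))
    return points

# toy DoG image with a single bright blob centre
toy = np.zeros((5, 5))
toy[2, 2] = 5.0
print(dog_extrema(toy))  # [(2, 2)]
```

The same test marks dark minima as well, since a strict minimum in the 3×3 neighborhood is also accepted.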
2.2) Harris corner detection:
Perform Harris corner detection on the left and right side images obtained in step 1) to obtain the left-image Harris corner set and the right-image Harris corner set, respectively;
2.3) Canny edge detection:
Perform Canny edge detection on the left and right side images obtained in step 1) to obtain left and right edge images. Apply one morphological dilation to each edge image, then extract and fill all contours of the dilated image; discard contours whose area exceeds 100 pixels and apply one morphological erosion; extract the contours of the eroded image again and output the coordinates of all contour centers, obtaining the left-image contour center set and the right-image contour center set, respectively;
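The dilation and erosion used in this step are standard binary morphology. As a hedged sketch (function names ours, a 3×3 square structuring element assumed; Canny detection and the contour extraction/filling are left to an image library), they can be written in NumPy as maxima/minima over the nine shifted images:

```python
import numpy as np

def dilate3x3(img):
    """Binary dilation: a pixel becomes 1 if any pixel of its 3x3
    neighbourhood is 1 (maximum over the nine shifted images)."""
    h, w = img.shape
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out = np.maximum(out, p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w])
    return out

def erode3x3(img):
    """Binary erosion: a pixel stays 1 only if its entire 3x3
    neighbourhood is 1 (minimum over the nine shifted images)."""
    h, w = img.shape
    p = np.pad(img, 1)
    out = np.ones_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out = np.minimum(out, p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w])
    return out

edge = np.zeros((5, 5), dtype=int)
edge[2, 2] = 1
thick = dilate3x3(edge)          # the single edge pixel grows to a 3x3 block
print(int(thick.sum()))          # 9
print(bool((erode3x3(thick) == edge).all()))  # True: erosion undoes the dilation here
```

Dilation before contour filling closes small gaps in the Canny edges; the subsequent erosion restores the filled regions to roughly their original size.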
Then merge the left-image DoG extreme point set, the left-image Harris corner set and the left-image contour center set; for points with duplicated coordinates, keep one and discard the rest, obtaining the left-image spot set. Merge the right-image DoG extreme point set, Harris corner set and contour center set in the same way to obtain the right-image spot set.
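Merging the three point sets with coordinate de-duplication, as described above, amounts to an order-preserving set union; a minimal sketch (function name ours):

```python
def merge_point_sets(*point_sets):
    """Merge several lists of (x, y) spot coordinates; when a coordinate
    appears more than once, keep the first occurrence and drop the rest."""
    seen = set()
    merged = []
    for points in point_sets:
        for p in points:
            if p not in seen:
                seen.add(p)
                merged.append(p)
    return merged

spots = merge_point_sets([(10, 12), (30, 44)],   # DoG extreme points
                         [(30, 44), (55, 60)],   # Harris corners
                         [(10, 12), (70, 80)])   # contour centres
print(spots)  # [(10, 12), (30, 44), (55, 60), (70, 80)]
```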
The matching judgment in step 3) between each point of the left-image spot set and each point of the right-image spot set comprises the following specific steps:
3.1) Centered on candidate point A of the left-image spot set, select a 70×70 rectangular region and search the left-image spot set for the point inside this region closest to A; denote it point B and compute the vector from A to B, denoted AB;
3.2) Search the right-image spot set for a candidate point A' whose vertical coordinate differs from that of A by no more than 6 pixels. Centered on A', select a 70×70 rectangular region and search the right-image spot set for the point inside this region whose vertical coordinate differs from that of B by no more than 6 pixels and which is closest to A'; denote it point B' and compute the vector A'B'. If no such point B' can be found, discard A and A' as a matching pair and skip the remaining steps of the matching judgment for A and A';
3.3) Search the left-image spot set, within the 70×70 rectangular region centered on A, for the second-closest point to A; denote it point C and compute the vector AC. If no such point C can be found, discard A and A' as a matching pair and skip the remaining steps of the matching judgment for A and A';
3.4) Search the right-image spot set, within the 70×70 rectangular region centered on A', for the point whose vertical coordinate differs from that of C by no more than 6 pixels and which is closest to A'; denote it point C' and compute the vector A'C'. If no such point C' can be found, discard A and A' as a matching pair and skip the remaining steps of the matching judgment for A and A';
3.5) Use formula (3) to compute the angle α between vectors AB and A'B' and the angle β between vectors AC and A'C', and use formula (4) to compute the modulus difference dis1 between AB and A'B' and the modulus difference dis2 between AC and A'C':

α = arccos( (a·c + b·d) / ( √(a²+b²) · √(c²+d²) ) )    (3)

dis = | √(a²+b²) − √(c²+d²) |    (4)

where a, b are the horizontal and vertical components of the first vector of each pair and c, d those of the second;
If the angle α between AB and A'B' satisfies α < 15° and the modulus difference satisfies dis1 < 8, and the angle β between AC and A'C' satisfies β < 18° and the modulus difference satisfies dis2 < 10, take A and A' as a candidate matching pair and end the matching judgment of A against the remaining points of the right-image spot set; otherwise discard A and A' as a candidate matching pair.
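Formulas (3) and (4) and the threshold test of step 3.5) can be sketched as follows (illustrative only; function names are ours, and the thresholds are those quoted above):

```python
import math

def angle_deg(v1, v2):
    """Formula (3): angle between vectors v1 = (a, b) and v2 = (c, d), in degrees."""
    a, b = v1
    c, d = v2
    cos = (a * c + b * d) / (math.hypot(a, b) * math.hypot(c, d))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def mod_diff(v1, v2):
    """Formula (4): absolute difference of the two vector moduli."""
    return abs(math.hypot(*v1) - math.hypot(*v2))

def is_candidate_match(AB, ApBp, AC, ApCp):
    """Step 3.5): keep (A, A') only if both the nearest-point vectors and the
    second-nearest-point vectors agree in direction and length."""
    return (angle_deg(AB, ApBp) < 15 and mod_diff(AB, ApBp) < 8 and
            angle_deg(AC, ApCp) < 18 and mod_diff(AC, ApCp) < 10)

print(is_candidate_match((10, 0), (10, 1), (0, 20), (1, 20)))  # True
print(is_candidate_match((10, 0), (0, 10), (0, 20), (1, 20)))  # False (90 deg apart)
```

Clamping the cosine into [−1, 1] guards against floating-point round-off for nearly parallel vectors.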
Mismatch elimination in step 4) specifically adopts the following steps:
4.1) Edge point elimination:
A candidate pair in the candidate matching point set consists of candidate matching point P in the left side image and candidate matching point P' in the right side image. Centered on P and on P', select a 20×20 rectangular region in each image; if either region contains a pixel whose red, green and blue components are all greater than 200, discard P and P' as a pair of correct matching points;
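The highlight test of step 4.1) (a 20×20 window and an R, G, B > 200 criterion) can be sketched with NumPy; `is_bright_window` is our name, and the image is assumed to be an H×W×3 RGB array:

```python
import numpy as np

def is_bright_window(img, cx, cy, half=10, thresh=200):
    """Step 4.1): True if the 20x20 window centred on (cx, cy) contains a
    pixel whose red, green and blue components all exceed `thresh`."""
    h, w, _ = img.shape
    y0, y1 = max(0, cy - half), min(h, cy + half)
    x0, x1 = max(0, cx - half), min(w, cx + half)
    window = img[y0:y1, x0:x1]
    return bool(np.any(np.all(window > thresh, axis=2)))

img = np.zeros((40, 40, 3), dtype=np.uint8)
img[20, 20] = (255, 255, 255)            # one saturated pixel near the fruit edge
print(is_bright_window(img, 20, 20))     # True -> discard P and P'
print(is_bright_window(img, 5, 5))       # False -> keep the pair for step 4.2)
```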
4.2) Perform the first round of vector judgment:
4.2.1) If the number of pairs in the candidate matching point set obtained above is less than 3, go directly to step 4.3);
4.2.2) In the left side image, centered on candidate matching point P, select a 70×70 rectangular region and search it for the candidate matching point closest to P; denote it Q and compute the vector PQ from P to Q;
If no such point Q can be found, discard P and P' as a pair of correct matching points and skip the remaining judgment steps for P and P';
4.2.3) Let Q' be the candidate matching point in the right side image corresponding to Q in the left side image, and compute the vector P'Q';
4.2.4) In the left side image, search the 70×70 rectangular region centered on P for the second-closest candidate matching point to P; denote it R and compute the vector PR;
If no such point R can be found, discard P and P' as a matching pair and skip the remaining judgment steps for P and P';
4.2.5) Let R' be the candidate matching point in the right side image corresponding to R in the left side image, and compute the vector P'R';
4.2.6) Use formula (3) to compute the angle ρ between vectors PQ and P'Q' and the angle μ between vectors PR and P'R', and use formula (4) to compute the modulus difference dis3 between PQ and P'Q' and the modulus difference dis4 between PR and P'R';
4.2.7) If ρ < 15° and dis3 < 8 hold for PQ and P'Q', or μ < 18° and dis4 < 10 hold for PR and P'R', continue with the following steps; otherwise discard P and P' as a pair of correct matching points;
4.3) Second round of vector judgment:
Repeat steps 4.2.1)-4.2.6) for the candidate matching points P and P' retained by step 4.2.7) to obtain the angle ρ, the modulus difference dis3, the angle μ and the modulus difference dis4 once more. If ρ < 15°, dis3 < 8, μ < 18° and dis4 < 10 all hold, P and P' are correct matching points; otherwise discard P and P' as a pair of correct matching points.
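The difference between the two rounds is the combinator: round one keeps a pair if EITHER vector pair agrees (an OR), while round two demands that BOTH agree (an AND). A minimal sketch with our function names, reusing the angle and modulus thresholds quoted above:

```python
def first_round_ok(rho, dis3, mu, dis4):
    """Step 4.2.7): keep the pair if the nearest-point vectors agree
    (rho < 15 deg and dis3 < 8) OR the second-nearest-point vectors agree
    (mu < 18 deg and dis4 < 10)."""
    return (rho < 15 and dis3 < 8) or (mu < 18 and dis4 < 10)

def second_round_ok(rho, dis3, mu, dis4):
    """Step 4.3): both conditions must now hold simultaneously."""
    return rho < 15 and dis3 < 8 and mu < 18 and dis4 < 10

print(first_round_ok(10, 5, 30, 20))   # True  (nearest-point vectors agree)
print(second_round_ok(10, 5, 30, 20))  # False (second-nearest pair disagrees)
print(second_round_ok(10, 5, 12, 6))   # True
```

Because the first round already discards some candidates, the nearest and second-nearest neighbors of P may change, which is why the vectors are recomputed before the stricter second-round test.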
The fruit described is a Red Delicious apple (蛇果).
The resolution of both the left side image and the right side image is 0.146 mm/pixel, so the 70×70-pixel search window corresponds to roughly 10.2 mm × 10.2 mm on the fruit surface.
The beneficial effects of the invention are:
Through spot extraction and vector-based judgment of the spots, the invention achieves excellent stability, accuracy and real-time performance in fruit image matching.
Brief Description of the Drawings
Fig. 1 is the main flowchart of the matching method of the invention.
Fig. 2 shows the two original fruit images of the embodiment of the invention.
Fig. 3 shows the first-layer and second-layer Gaussian left images of the embodiment.
Fig. 4 is the difference-of-Gaussian left image of the embodiment.
Fig. 5 is the spot coordinate map obtained by extreme point detection on the difference-of-Gaussian image in the embodiment.
Fig. 6 is the spot coordinate map obtained by Harris corner detection in the embodiment.
Fig. 7 shows the steps of obtaining spot coordinates by Canny edge detection in the embodiment.
Fig. 8 is the spot coordinate map obtained by Canny edge detection in the embodiment.
Fig. 9 is a schematic diagram of the selection of the closest and second-closest points during spot matching in the embodiment.
Fig. 10 illustrates the spot matching judgment process of the embodiment.
Fig. 11 is a schematic diagram of edge point elimination in the embodiment.
Fig. 12 illustrates the mismatch elimination process of the embodiment.
Fig. 13 shows the matching points obtained by the embodiment.
Detailed Description of the Embodiments
The invention is further described below with reference to the drawings and an embodiment.
As shown in Fig. 1, the method of the invention comprises the following steps:
1) Acquire side images of the fruit:
Place the fruit on a fruit tray so that the line connecting the calyx and the stem is roughly perpendicular to the horizontal plane; acquire a left side image from the side of the fruit, then rotate the fruit 60° about the calyx-stem axis and acquire a right side image.
2) Perform spot extraction using the following three methods to obtain the left-image spot set and the right-image spot set:
2.1) Extreme point detection:
First compute the left and right side images according to formula (1) below to obtain an initial Gaussian left image and an initial Gaussian right image:
L(x,y,σ) = G(x,y,σ) * I(x,y)    (1)
where L(x,y,σ) is the computed initial Gaussian image, I(x,y) is the side image to be computed, and G(x,y,σ) is given by formula (2):

G(x,y,σ) = (1 / (2πσ²)) · exp( −(x²+y²) / (2σ²) )    (2)

where σ is the scale coordinate, and x and y are the horizontal and vertical coordinates of the left or right side image;
Then double the size of the initial Gaussian left and right images to obtain the first-layer Gaussian left and right images; compute the first-layer Gaussian left and right images again by formula (1) to obtain the second-layer Gaussian left and right images;
Finally, subtract the first-layer Gaussian left image from the second-layer Gaussian left image to obtain the difference-of-Gaussian (DoG) left image, and subtract the first-layer Gaussian right image from the second-layer Gaussian right image to obtain the DoG right image;
For each pixel p of the DoG left and right images, if the gray value of p is smaller than, or larger than, the gray values of all the other pixels in the 3×3 neighborhood centered on p, mark p as an extreme point;
Traverse every pixel of the DoG left and right images to obtain the left-image DoG extreme point set and the right-image DoG extreme point set.
2.2) Harris corner detection:
Perform Harris corner detection (Chris Harris, Mike Stephens. A Combined Corner and Edge Detector. 4th Alvey Vision Conference, 1988: 147-151) on the left and right side images obtained in step 1) to obtain the left-image and right-image Harris corner sets, respectively.
2.3) Canny edge detection:
Perform Canny edge detection (Canny, J. A Computational Approach to Edge Detection. IEEE Trans. Pattern Analysis and Machine Intelligence, 1986, 8: 679-714) on the left and right side images obtained in step 1) to obtain left and right edge images. Apply one morphological dilation (Rafael C. Gonzalez, Richard E. Woods. Digital Image Processing, Third Edition, 2010: 402-442) to each edge image, then extract and fill all contours of the dilated image; discard contours whose area exceeds 100 pixels and apply one morphological erosion (Gonzalez and Woods, 2010: 402-442); extract the contours of the eroded image again and output the coordinates of all contour centers, obtaining the left-image and right-image contour center sets, respectively;
Then merge the left-image DoG extreme point set, the left-image Harris corner set and the left-image contour center set; for points with duplicated coordinates, keep one and discard the rest, obtaining the left-image spot set. Merge the right-image DoG extreme point set, Harris corner set and contour center set in the same way to obtain the right-image spot set.
3) Blob matching. Each point of the left-image blob set obtained in step 2) is tested for a match against each point of the right-image blob set, by the following specific steps:
3.1) Centered on candidate point A in the left-image blob set, select a 70×70 rectangular region; search the left-image blob set for the point inside this region closest to A, denote it point B, and compute the vector from A to B, denoted vector AB.
3.2) Search the right-image blob set for a candidate point A' whose vertical coordinate differs from that of A by no more than 6 pixels. Centered on A', select a 70×70 rectangular region; search the right-image blob set for the point inside this region whose vertical coordinate differs from that of B by no more than 6 pixels and which is closest to A'; denote it point B' and compute the vector A'B'.
If no such point B' is found, A and A' are rejected as a matching pair and the remaining steps of the matching test for A and A' are skipped.
3.3) Search the left-image blob set for the second-closest point to A within the 70×70 rectangular region centered on A; denote it point C and compute the vector AC. If no such point C is found, A and A' are rejected as a matching pair and the remaining steps of the matching test for A and A' are skipped.
3.4) Search the right-image blob set, within the 70×70 rectangular region centered on A', for the point whose vertical coordinate differs from that of C by no more than 6 pixels and which is closest to A'; denote it point C' and compute the vector A'C'. If no such point C' is found, A and A' are rejected as a matching pair and the remaining steps of the matching test for A and A' are skipped.
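For illustration only, the nearest and second-nearest searches of steps 3.1)-3.4) can be sketched as follows; the 70×70 window is interpreted here as ±35 pixels around the center point, an assumption the specification does not spell out:

```python
import math

def nearest_in_window(center, points, half=35, rank=1, row_ref=None, row_tol=6):
    """Return the rank-th closest point to `center` inside a square window,
    optionally restricted to points whose y-coordinate differs from `row_ref`
    by at most `row_tol` pixels (the row constraint used in steps 3.2/3.4).
    Returns None if no qualifying point exists."""
    cx, cy = center
    cands = [p for p in points
             if p != center
             and abs(p[0] - cx) <= half and abs(p[1] - cy) <= half
             and (row_ref is None or abs(p[1] - row_ref) <= row_tol)]
    cands.sort(key=lambda p: math.hypot(p[0] - cx, p[1] - cy))
    return cands[rank - 1] if len(cands) >= rank else None

blob_set = [(0, 0), (3, 4), (6, 8), (100, 100)]
B = nearest_in_window((0, 0), blob_set)          # closest point
C = nearest_in_window((0, 0), blob_set, rank=2)  # second-closest point
```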
3.5) Use formula (3) to compute the angle α between vectors AB and A'B' and the angle β between vectors AC and A'C'; use formula (4) to compute the modulus difference dis1 between vectors AB and A'B' and the modulus difference dis2 between vectors AC and A'C':

θ = arccos((a·c + b·d) / (√(a² + b²) · √(c² + d²)))   (3)

dis = |√(a² + b²) − √(c² + d²)|   (4)
where a and b are the horizontal and vertical components of the first vector and c and d are those of the second; when substituting, the two vectors are AB and A'B' (for α and dis1) or AC and A'C' (for β and dis2).
If the angle α between vectors AB and A'B' is less than 15°, their modulus difference dis1 is less than 8, the angle β between vectors AC and A'C' is less than 18°, and their modulus difference dis2 is less than 10, then A and A' are taken as a candidate matching pair and the matching test of A against the remaining points of the right-image blob set ends; otherwise A and A' are rejected as a candidate matching pair.
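For illustration only, the thresholds of step 3.5) can be sketched as follows, taking formulas (3) and (4) as the standard inner-product angle and modulus difference implied by the definitions above:

```python
import math

def angle_deg(v1, v2):
    """Angle between 2-D vectors (a, b) and (c, d) in degrees, formula (3)."""
    a, b = v1
    c, d = v2
    cos_t = (a * c + b * d) / (math.hypot(a, b) * math.hypot(c, d))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))

def mod_diff(v1, v2):
    """Absolute difference of the vector moduli, formula (4)."""
    return abs(math.hypot(*v1) - math.hypot(*v2))

def is_candidate_pair(AB, AB2, AC, AC2):
    """Step 3.5): all four thresholds must hold simultaneously."""
    return (angle_deg(AB, AB2) < 15 and mod_diff(AB, AB2) < 8 and
            angle_deg(AC, AC2) < 18 and mod_diff(AC, AC2) < 10)

# Nearly identical neighbor vectors on both sides pass the test:
ok = is_candidate_pair((10, 0), (10, 1), (0, 12), (1, 12))
```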
4) Elimination of false matches. Each candidate matching pair in the candidate matching point set obtained in step 3) is screened by the following steps to obtain the correct matching points:
4.1) Edge point elimination:
For a given candidate matching pair, let P and P' denote the candidate matching points in the left-side and right-side images, respectively. Centered on P and on P', select 20×20 rectangular regions. If either region contains a pixel whose red, green, and blue component values are all greater than 200, P and P' are rejected as a correct matching pair.
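For illustration only, the near-white background test of step 4.1) can be sketched in NumPy, assuming an H×W×3 RGB image array; clipping the 20×20 window at the image border is an assumption not spelled out in the specification:

```python
import numpy as np

def touches_background(img: np.ndarray, point, half=10, thresh=200) -> bool:
    """True if the window around `point` contains a pixel whose R, G and B
    components all exceed `thresh` (i.e. the white background behind the fruit)."""
    x, y = point
    h, w = img.shape[:2]
    win = img[max(0, y - half):min(h, y + half),
              max(0, x - half):min(w, x + half)]
    return bool(np.any(np.all(win > thresh, axis=-1)))

img = np.full((50, 50, 3), 120, dtype=np.uint8)  # dull fruit surface
img[0:5, 0:5] = 255                              # white background corner
near_edge = touches_background(img, (3, 3))      # window overlaps the background
interior = touches_background(img, (30, 30))     # window fully on the fruit
```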
4.2) First round of vector judgment:
4.2.1) If the candidate matching point set obtained above contains fewer than 3 pairs, proceed directly to step 4.3);
4.2.2) In the left-side image, centered on candidate matching point P, select a 70×70 rectangular region;
Search this region for the candidate matching point closest to P, denote it Q, and compute the vector PQ;
If no such point Q is found, P and P' are rejected as a correct matching pair and the remaining judgment steps for P and P' are skipped;
4.2.3) Let Q' be the candidate matching point in the right-side image corresponding to Q in the left-side image; compute the vector P'Q';
4.2.4) In the left-side image, search the 70×70 rectangular region centered on P for the second-closest candidate matching point to P, denote it R, and compute the vector PR;
If no such point R is found, P and P' are rejected as a correct matching pair and the remaining judgment steps for P and P' are skipped;
4.2.5) Let R' be the candidate matching point in the right-side image corresponding to R in the left-side image; compute the vector P'R';
4.2.6) Use formula (3) to compute the angle ρ between vectors PQ and P'Q' and the angle μ between vectors PR and P'R'; use formula (4) to compute the modulus difference dis3 between vectors PQ and P'Q' and the modulus difference dis4 between vectors PR and P'R';
When substituting, the vectors in formulas (3) and (4) are PQ and P'Q' (for ρ and dis3) or PR and P'R' (for μ and dis4), respectively.
4.2.7) If the angle ρ between vectors PQ and P'Q' is less than 15° and their modulus difference dis3 is less than 8, or the angle μ between vectors PR and P'R' is less than 18° and their modulus difference dis4 is less than 10, proceed to the following steps; otherwise P and P' are rejected as a correct matching pair;
4.3) Second round of vector judgment:
Repeat steps 4.2.1)-4.2.6) for the pair P and P' retained by step 4.2.7), obtaining again the angle ρ and modulus difference dis3 between vectors PQ and P'Q' and the angle μ and modulus difference dis4 between vectors PR and P'R'. If ρ < 15° and dis3 < 8 and μ < 18° and dis4 < 10, the pair P and P' is a correct matching pair; otherwise it is rejected.
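For illustration only, the difference between the two rounds is the combination of thresholds: the first round keeps a pair if either neighbor vector agrees (OR), while the second demands both (AND). A self-contained sketch, with formulas (3) and (4) folded into one helper:

```python
import math

def agrees(v1, v2, max_angle, max_mod):
    """One vector-pair test: the angle (formula (3)) and the modulus
    difference (formula (4)) must both fall under their thresholds."""
    a, b = v1
    c, d = v2
    cos_t = (a * c + b * d) / (math.hypot(a, b) * math.hypot(c, d))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))
    return angle < max_angle and abs(math.hypot(a, b) - math.hypot(c, d)) < max_mod

def first_round(PQ, PQ2, PR, PR2):
    """Step 4.2.7): keep the pair if EITHER neighbor vector agrees."""
    return agrees(PQ, PQ2, 15, 8) or agrees(PR, PR2, 18, 10)

def second_round(PQ, PQ2, PR, PR2):
    """Step 4.3): keep the pair only if BOTH neighbor vectors agree."""
    return agrees(PQ, PQ2, 15, 8) and agrees(PR, PR2, 18, 10)

# One consistent neighbor vector suffices in round one, but not in round two:
loose = first_round((10, 0), (10, 1), (0, 10), (10, 0))
strict = second_round((10, 0), (10, 1), (0, 10), (10, 0))
```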
5) Fruit image matching is complete.
The fruit is a Red Delicious apple (蛇果).
The resolution of both the left-side and right-side images is 0.146 mm/pixel.
When acquiring the left-side and right-side images, the object distance is adjusted to 970 mm, the camera's zoom lens is set to a focal length of 25 mm, and the camera CCD size is 1/3 inch, so that the captured images have a resolution of 0.146 mm/pixel.
The embodiment of the present invention uses Red Delicious apples as the experimental object, specifically:
Step (1): the left-side and right-side images obtained by original image acquisition are shown in Fig. 2.
Step (2): blob extraction.
2.1) Extremum point detection.
Generate a 4-layer stack of Gaussian images. First, the left-side and right-side images are filtered according to formula (1) with σ = 0.5, yielding the initial Gaussian left and right images; these are doubled in size by upsampling to obtain the first-layer Gaussian left and right images. The first-layer Gaussian left and right images are then filtered according to formula (1) with σ = 1, yielding the second-layer Gaussian left and right images. The resulting first-layer and second-layer Gaussian left images are shown in the left and right panels of Fig. 3, respectively. The resulting difference-of-Gaussians left image is shown in Fig. 4.
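For illustration only, the Gaussian filtering and differencing can be sketched with a separable 1-D convolution in NumPy; truncating the kernel at radius 3σ is a common convention, not something the specification states:

```python
import numpy as np

def gaussian_blur(img: np.ndarray, sigma: float) -> np.ndarray:
    """Separable Gaussian filtering (formula (1) in spirit)."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()  # normalize so a constant image stays constant
    # Convolve each row, then each column, with the 1-D kernel.
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"),
                              1, img.astype(float))
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out

img = np.random.default_rng(0).random((16, 16))
dog = gaussian_blur(img, 1.0) - gaussian_blur(img, 0.5)  # difference of Gaussians
flat = gaussian_blur(np.ones((16, 16)), 1.0)             # interior stays at 1.0
```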
Extremum points are then marked: the difference-of-Gaussians left and right images are traversed to obtain the left-image and right-image difference-of-Gaussians extremum point sets, respectively; the coordinates of the left-image set are shown in Fig. 5.
2.2) Harris corner detection is applied to the left-side and right-side images, yielding the left-image and right-image Harris corner point sets, respectively; the coordinates of the left-image Harris corner point set are shown in Fig. 6.
2.3) As shown in Fig. 7, Canny edge detection and the related operations are applied to the left-side and right-side images, yielding the left-image and right-image contour center point sets, respectively; the coordinates of the left-image contour center point set are shown in Fig. 8.
The point sets are then merged to obtain the left-image and right-image blob sets.
Step (3): blob matching. As shown in Fig. 9, each point of the left-image and right-image blob sets obtained in step (2) is tested for a match; the effect of the judgment process is shown in Fig. 10.
Step (4): elimination of false matches. Each candidate matching pair in the candidate matching point set obtained in step (3) is screened separately:
4.1) Edge point elimination. As shown in Fig. 11, let P and P' denote a candidate matching pair in the left and right images, respectively. Centered on P and on P', select 20×20 rectangular regions and check whether either region contains a pixel whose red, green, and blue component values are all greater than 200. In the left panel of Fig. 11, the candidate matching point in the lower-left corner lies near the fruit edge, and its rectangular region contains white background whose red, green, and blue components all exceed 200; the pair is therefore rejected as a correct matching pair;
4.2) The first round and then the second round of vector judgment are performed.
Step (5): the candidate matching point set is traversed to obtain the correct matching point set.
The effect of the candidate matching point judgment in the above process is shown in Fig. 12; a diagram of the matching points finally obtained by the method of the present invention is shown in Fig. 13.
Experimental verification shows that the matching success rate reaches 100% and the false-match rate is 4.4%. For a fruit image with a resolution of 0.15 mm/pixel, the entire matching procedure takes 0.53 seconds in total.
The present invention therefore proposes fruit image matching by blob extraction and vector-based judgment of the blobs, giving the matching excellent stability, accuracy, and real-time performance. The reliability of the matching point calculation method of the present invention has been verified by experiment.
The above specific embodiments are intended to illustrate the present invention, not to limit it; any modification or change made to the present invention within its spirit and the scope of the claims falls within the scope of protection of the present invention.
Claims: 5.
Priority application: CN201410215892.7A, filed 2014-05-21.
Publications: CN104036492A (published 2014-09-10); CN104036492B (granted 2016-08-31).
Legal events: publication; entry into substantive examination; grant of patent; transfer of patent right (effective 2021-05-21) from Zhejiang University to Hangzhou nuotian Intelligent Technology Co., Ltd.