CN101334263A - Method for locating the center of a circular target - Google Patents

Method for locating the center of a circular target

Info

Publication number
CN101334263A
Authority
CN
China
Prior art keywords
point
pixel
edge
prime
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008100227931A
Other languages
Chinese (zh)
Other versions
CN101334263B (en)
Inventor
达飞鹏 (Da Feipeng)
张虎 (Zhang Hu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nantong Outpace Building Material Equipment Co ltd
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN2008100227931A priority Critical patent/CN101334263B/en
Publication of CN101334263A publication Critical patent/CN101334263A/en
Application granted granted Critical
Publication of CN101334263B publication Critical patent/CN101334263B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A method for locating the center of a circular target, applicable to many target-recognition and target-positioning tasks. First, the simple contour-centroid method gives a coarse estimate of the circle's center in the image; from the coarse center and coarse radius, a key square region is extracted as the region of interest. The Canny operator performs pixel-level edge location on the circular target inside the region of interest, and sub-pixel location is then performed according to the geometric features of the circle and its gray-level information, yielding accurate sub-pixel edge-point coordinates. The "isolated points" and noise appearing among the sub-pixel edge points are removed by curvature filtering and mean filtering respectively, and finally a circle is fitted to the filtered sub-pixel edge points by least squares, giving the final center and radius. The method not only effectively improves the accuracy of center location but also its robustness, thereby improving the measurement accuracy and stability of the measurement system.

Description

Method for locating the center of a circular target

Technical Field

The present invention relates to a method for sub-pixel edge location of a circular target and for filtering sub-pixel edge points, and in particular to a method for locating the center of a circular target.

Background Art

In many image analysis and processing systems, fast and accurate recovery of circle centers is widely used for target recognition and positioning; in 3D reconstruction systems, for example, locating the centers of the calibration-board circles with high precision has a decisive influence on the accuracy of the whole reconstruction. Common algorithms for computing the circle center include the centroid method, the Hough transform, Gaussian surface fitting, and least-squares curve fitting. The centroid method requires a fairly uniform gray-level distribution of the image, otherwise large errors result. The Hough transform must vote and record for every boundary point, so its computation time is long and its memory consumption large, which limits it in practice. Gaussian surface fitting exploits the fact that the gray-level distribution of a circular spot approximates a Gaussian model; its location accuracy is high, but for circles of large area the computation is heavy and slow, so it is unsuitable for real-time optical inspection systems. Least-squares curve fitting is the most common method, because it is simple to implement and computationally cheap, and it is widely used in practice; however, current least-squares approaches usually fit the center to pixel-level edge points, which limits the fitting accuracy. Whether an accurate sub-pixel edge of the circle in the image can be obtained is therefore important for improving the accuracy of the fitted center.

Common sub-pixel edge location algorithms include parameter fitting, moment-based methods, interpolation, and methods exploiting other geometric features of the image. "Subpixel edge localization and the interpolation of still images" (Jensen K, Anastassiou D, IEEE Trans. on PAMI, 1995, 4(3): 285-295) is an interpolation method: a pixel-level edge locator is applied to an interpolation-magnified image, and edge location reaches sub-pixel accuracy provided the actual gray-level distribution of the image is consistent with the interpolation model. "Detecting sub-pixel edges using tangent-direction information" (Zhang Yujin, Fu Zhuo, Pattern Recognition and Artificial Intelligence, 1997, 10(1): 69-74) mainly uses tangent-direction information to locate sub-pixel edges of targets of known shape; this method depends too heavily on the coarsely located center and radius and does not make full use of the image's gray-level information. "A fast subpixel edge detection method using Sobel-Zernike moments operator" (Qu Ying-Dong, Cui Cheng-Song, Chen San-Ben, Li Jin-Quan, Image and Vision Computing, 2005, 23: 11-17) achieves sub-pixel edge location with moments. Moment-based methods are widely used, but because several 7×7 templates are required they are unsuitable for real-time processing of large bitmaps. "A fast sub-pixel edge detection method for images" (刘力双, 张铫, Optoelectronics Laser, 2005, 16(8): 993-996) describes the principle of sub-pixel edge parameter fitting, using the Sobel operator together with parameter fitting to achieve sub-pixel edge location; two problems remain: first, the Sobel operator cannot achieve whole-pixel edge location for most images, which makes the sub-pixel location inaccurate; second, the method of that paper cannot guarantee the location accuracy of quadratic-curve edges.

Summary of the Invention

In view of the shortcomings and limitations of the prior art, the object of the present invention is to provide a method for locating the center of a circular target that can improve the accuracy of a measurement system.

The present invention designs a method that first coarsely locates the center of the circular target in the image; after obtaining the pixel-level edge points of the circular target with the Canny operator, it computes sub-pixel edge-point coordinates by jointly considering the geometric features of the circle and the gray-level distribution of the edge; it then filters the accurate sub-pixel edge points; and finally it fits the filtered sub-pixel edge points by least squares to obtain the center of the circular target. The present invention adopts the following technical scheme:

A method for locating the center of a circular target, whose main steps are:

Step 1: Coarsely locate the center and radius of the circular target in the image, obtaining the coarse center (x_oc, y_oc) and coarse radius R_c of the circular target, where (x_oc, y_oc) are the coordinates of the center in the image coordinate system. The specific method is as follows:

Step 1.1: Denoise and threshold the image to obtain a binary image in which every pixel has gray value 255 or 0;

Step 1.2: Perform boundary extraction and boundary tracking on the circular target in the binary image;

Step 1.3: Fit a circle to the tracked boundary points with the contour-centroid method to obtain the coarse center and coarse radius of the circular target, and store the coarse center coordinates (x_oc, y_oc) and coarse radius R_c;
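The contour-centroid fit of step 1.3 can be sketched as follows. This is an illustrative reading (mean of the tracked boundary coordinates as the coarse center, mean distance to that center as the coarse radius); the patent gives no code, and all names are ours.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Coarse circle estimate from traced boundary points: the contour-centroid
// method takes the mean of the boundary coordinates as the center and the
// mean distance from that center as the radius.
void coarseCircle(const std::vector<Pt>& boundary,
                  double& xoc, double& yoc, double& rc) {
    double sx = 0.0, sy = 0.0;
    for (const Pt& p : boundary) { sx += p.x; sy += p.y; }
    const double n = static_cast<double>(boundary.size());
    xoc = sx / n;
    yoc = sy / n;
    double sr = 0.0;
    for (const Pt& p : boundary)
        sr += std::hypot(p.x - xoc, p.y - yoc);
    rc = sr / n;
}
```

For boundary points spread evenly around a circle this recovers the center and radius exactly; for the raster boundaries of step 1.2 it gives the coarse estimate the later steps refine.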

Step 2: Perform pixel-level edge location on the circular target in the image. The specific steps are:

Step 2.1: From the coarse center and coarse radius, extract a square region from the image, called the "region of interest". The square is extracted as follows: its center point is the coarse center of the circular target, and its side length is the coarse diameter of the circular target plus 2 to 6 pixels;
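A minimal sketch of this region-of-interest extraction, assuming the square must stay inside the image; the clamping at the borders and all names are our additions, not the patent's.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Roi { int x0, y0, side; };  // top-left corner and side length, in pixels

// Square ROI of step 2.1: centered on the coarse center (xoc, yoc),
// side = coarse diameter + margin (the patent uses a margin of 2 to 6 pixels).
Roi extractRoi(double xoc, double yoc, double rc,
               int margin, int imgW, int imgH) {
    int side = static_cast<int>(std::lround(2.0 * rc)) + margin;
    int x0 = static_cast<int>(std::lround(xoc)) - side / 2;
    int y0 = static_cast<int>(std::lround(yoc)) - side / 2;
    // Clamp so the square stays inside the image (our assumption).
    x0 = std::max(0, std::min(x0, imgW - side));
    y0 = std::max(0, std::min(y0, imgH - side));
    return Roi{x0, y0, side};
}
```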

Step 2.2: Apply the Canny operator to the extracted region of interest to perform pixel-level edge detection, obtaining the pixel-level edge of the circular target; then track the boundary of the pixel-level edge points extracted by the Canny operator, obtaining the coordinates of each pixel-level edge point, and store the coordinates of each pixel-level edge point in clockwise order;

Step 3: Perform sub-pixel edge location on the circular target in the image. The specific steps are:

Step 3.1: Partition the edge pixels of the circular target into regions, as follows. Let the x-axis be the line through the coarse center of the circular target, with the coarse center as origin, the horizontal rightward direction positive, and the pixel as the unit of length; let the y-axis be the line through the coarse center, with the vertical upward direction positive and the pixel as the unit of length. With the coarse center as the rotation point, let θ be the angle of counterclockwise rotation from the positive x-axis. According to θ, the edge pixels of the circular target are divided into two parts: the pixels with θ ∈ [0°, 45°] ∪ [135°, 225°] ∪ [315°, 360°], called the edge pixels of region 1, and the pixels with θ ∈ (45°, 135°) ∪ (225°, 315°), called the edge pixels of region 2. The line joining the coarse center to an edge pixel is the gradient direction of that edge pixel;
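The angular partition of step 3.1 reduces to a comparison of absolute coordinate differences, since region 1 is exactly the set of directions where |tan θ| ≤ 1. A minimal sketch (function and variable names are ours):

```cpp
#include <cassert>
#include <cmath>

// Region test of step 3.1: returns 1 when the edge pixel's direction about
// the coarse center lies in [0,45]∪[135,225]∪[315,360] degrees (interpolation
// will run along x), and 2 otherwise (interpolation runs along y).
// Equivalent test: region 1 iff |dx| >= |dy|.
int edgeRegion(double xp, double yp, double xoc, double yoc) {
    double dx = xp - xoc;
    double dy = yp - yoc;
    return (std::fabs(dx) >= std::fabs(dy)) ? 1 : 2;
}
```

The comparison keeps the closed/open interval boundaries of the patent: at θ = 45° exactly, |dx| = |dy| and the point falls into region 1, matching θ ∈ [0°, 45°].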

Step 3.2: Find the neighboring points of each edge pixel along its gradient direction, where (x_p, y_p) are the coordinates, in pixels, of the edge point P. The specific method is as follows. If the edge pixel P(x_p, y_p) lies in region 1: take the intersection A_1(x_a1, y_a1) of the pixel's gradient-direction line with the line x = x_p + 1 as the first neighboring point to the right of the edge pixel along the gradient direction; the intersection B_1(x_b1, y_b1) with the line x = x_p + 2 as the second neighboring point to the right; the intersection F_1(x_f1, y_f1) with the line x = x_p + 3 as the third neighboring point to the right; the intersection C_1(x_c1, y_c1) with the line x = x_p - 1 as the first neighboring point to the left; the intersection D_1(x_d1, y_d1) with the line x = x_p - 2 as the second neighboring point to the left; and the intersection E_1(x_e1, y_e1) with the line x = x_p - 3 as the third neighboring point to the left. If the edge pixel P(x_p, y_p) lies in region 2: take the intersection A_2(x_a2, y_a2) of the pixel's gradient-direction line with the line y = y_p + 1 as the first neighboring point above the edge pixel along the gradient direction; the intersection B_2(x_b2, y_b2) with the line y = y_p + 2 as the second neighboring point above; the intersection F_2(x_f2, y_f2) with the line y = y_p + 3 as the third neighboring point above; the intersection C_2(x_c2, y_c2) with the line y = y_p - 1 as the first neighboring point below; the intersection D_2(x_d2, y_d2) with the line y = y_p - 2 as the second neighboring point below; and the intersection E_2(x_e2, y_e2) with the line y = y_p - 3 as the third neighboring point below;

Step 3.3: Use linear gray-level interpolation to obtain the gray values of the neighboring points of the edge pixel P(x_p, y_p) along the gradient direction, where (x_p, y_p) are the coordinates, in pixels, of the edge pixel P, f(x, y) denotes the gray value of the pixel at coordinates (x, y), and [ ] denotes the integer part. The linear gray-level interpolation used to obtain the gray values of the neighboring points is as follows:

(1) If the edge pixel P(x_p, y_p) is in region 1:

Gray value of neighboring point A_1(x_a1, y_a1):

f(x_a1, y_a1) = (1-λ)*f(x_a1, [y_a1]) + λ*f(x_a1, [y_a1]+1),  λ = y_a1 - [y_a1]

Gray value of neighboring point B_1(x_b1, y_b1):

f(x_b1, y_b1) = (1-λ)*f(x_b1, [y_b1]) + λ*f(x_b1, [y_b1]+1),  λ = y_b1 - [y_b1]

Gray value of neighboring point F_1(x_f1, y_f1):

f(x_f1, y_f1) = (1-λ)*f(x_f1, [y_f1]) + λ*f(x_f1, [y_f1]+1),  λ = y_f1 - [y_f1]

Gray value of neighboring point C_1(x_c1, y_c1):

f(x_c1, y_c1) = (1-λ)*f(x_c1, [y_c1]) + λ*f(x_c1, [y_c1]+1),  λ = y_c1 - [y_c1]

Gray value of neighboring point D_1(x_d1, y_d1):

f(x_d1, y_d1) = (1-λ)*f(x_d1, [y_d1]) + λ*f(x_d1, [y_d1]+1),  λ = y_d1 - [y_d1]

Gray value of neighboring point E_1(x_e1, y_e1):

f(x_e1, y_e1) = (1-λ)*f(x_e1, [y_e1]) + λ*f(x_e1, [y_e1]+1),  λ = y_e1 - [y_e1]

(2) If the edge pixel P(x_p, y_p) is in region 2:

Gray value of neighboring point A_2(x_a2, y_a2):

f(x_a2, y_a2) = (1-λ)*f([x_a2], y_a2) + λ*f([x_a2]+1, y_a2),  λ = x_a2 - [x_a2]

Gray value of neighboring point B_2(x_b2, y_b2):

f(x_b2, y_b2) = (1-λ)*f([x_b2], y_b2) + λ*f([x_b2]+1, y_b2),  λ = x_b2 - [x_b2]

Gray value of neighboring point F_2(x_f2, y_f2):

f(x_f2, y_f2) = (1-λ)*f([x_f2], y_f2) + λ*f([x_f2]+1, y_f2),  λ = x_f2 - [x_f2]

Gray value of neighboring point C_2(x_c2, y_c2):

f(x_c2, y_c2) = (1-λ)*f([x_c2], y_c2) + λ*f([x_c2]+1, y_c2),  λ = x_c2 - [x_c2]

Gray value of neighboring point D_2(x_d2, y_d2):

f(x_d2, y_d2) = (1-λ)*f([x_d2], y_d2) + λ*f([x_d2]+1, y_d2),  λ = x_d2 - [x_d2]

Gray value of neighboring point E_2(x_e2, y_e2):

f(x_e2, y_e2) = (1-λ)*f([x_e2], y_e2) + λ*f([x_e2]+1, y_e2),  λ = x_e2 - [x_e2]
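The interpolation formulas above can be sketched for the region-1 case (integral x, fractional y; the region-2 case swaps the roles of x and y). The row-major image layout and all names are our assumptions:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Linear gray interpolation of step 3.3 for a region-1 neighbor: its x
// coordinate is integral by construction (x = x_p +/- k), so the gray value
// is interpolated between the two vertically adjacent pixels, with
// lambda = y - [y] and [] the integer part.
// Image stored row-major: f(x, y) = img[y * w + x].
double grayAtRegion1(const std::vector<double>& img, int w,
                     int x, double y) {
    int y0 = static_cast<int>(std::floor(y));   // [y]
    double lam = y - y0;                        // lambda = y - [y]
    return (1.0 - lam) * img[y0 * w + x] + lam * img[(y0 + 1) * w + x];
}
```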

Step 3.4: From the gray values of the edge point's neighboring points along the gradient direction found in step 3.3 and the gray value of the edge point P(x_p, y_p) itself, take the average of the forward and backward differences of the gray values as the gray difference assigned to the edge point and to each of its neighboring points along the gradient direction, where (x_p, y_p) are the coordinates, in pixels, of the edge pixel P and f(x, y) denotes the gray value of the pixel at coordinates (x, y). The gray differences are computed as follows:

(1) If the edge pixel P(x_p, y_p) is in region 1:

Gray difference of neighboring point A_1(x_a1, y_a1):

f_a1 = |f(x_p, y_p) - f(x_a1, y_a1)| / 2 + |f(x_a1, y_a1) - f(x_b1, y_b1)| / 2

Gray difference of neighboring point B_1(x_b1, y_b1):

f_b1 = |f(x_a1, y_a1) - f(x_b1, y_b1)| / 2 + |f(x_b1, y_b1) - f(x_f1, y_f1)| / 2

Gray difference of the edge point P(x_p, y_p):

f_p1 = |f(x_c1, y_c1) - f(x_p, y_p)| / 2 + |f(x_p, y_p) - f(x_a1, y_a1)| / 2

Gray difference of neighboring point C_1(x_c1, y_c1):

f_c1 = |f(x_d1, y_d1) - f(x_c1, y_c1)| / 2 + |f(x_c1, y_c1) - f(x_p, y_p)| / 2

Gray difference of neighboring point D_1(x_d1, y_d1):

f_d1 = |f(x_e1, y_e1) - f(x_d1, y_d1)| / 2 + |f(x_d1, y_d1) - f(x_c1, y_c1)| / 2

(2) If the edge pixel P(x_p, y_p) is in region 2:

Gray difference of neighboring point A_2(x_a2, y_a2):

f_a2 = |f(x_p, y_p) - f(x_a2, y_a2)| / 2 + |f(x_a2, y_a2) - f(x_b2, y_b2)| / 2

Gray difference of neighboring point B_2(x_b2, y_b2):

f_b2 = |f(x_a2, y_a2) - f(x_b2, y_b2)| / 2 + |f(x_b2, y_b2) - f(x_f2, y_f2)| / 2

Gray difference of the edge point P(x_p, y_p):

f_p2 = |f(x_c2, y_c2) - f(x_p, y_p)| / 2 + |f(x_p, y_p) - f(x_a2, y_a2)| / 2

Gray difference of neighboring point C_2(x_c2, y_c2):

f_c2 = |f(x_d2, y_d2) - f(x_c2, y_c2)| / 2 + |f(x_c2, y_c2) - f(x_p, y_p)| / 2

Gray difference of neighboring point D_2(x_d2, y_d2):

f_d2 = |f(x_e2, y_e2) - f(x_d2, y_d2)| / 2 + |f(x_d2, y_d2) - f(x_c2, y_c2)| / 2
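Every gray difference of step 3.4 has the same shape: the average of the absolute backward and forward differences of three consecutive samples along the gradient. A one-line helper makes that explicit (names are ours):

```cpp
#include <cassert>
#include <cmath>

// Step 3.4 in one helper: gray difference assigned to a sample point, given
// the gray values of its predecessor, itself, and its successor along the
// gradient direction: |prev - cur|/2 + |cur - next|/2.
double grayDiff(double prev, double cur, double next) {
    return std::fabs(prev - cur) / 2.0 + std::fabs(cur - next) / 2.0;
}
```

For example, f_a1 above is grayDiff(f(P), f(A_1), f(B_1)), and f_p1 is grayDiff(f(C_1), f(P), f(A_1)).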

Step 3.5: From the gray differences of the edge point and its neighboring points along the gradient direction obtained in step 3.4, compute the distance difference δ between the sub-pixel edge point P′ and the pixel-level edge point P as

[Formula given in the original only as an image: Figure A20081002279300156]

If any of the gray differences of the edge point and its neighboring points along the gradient direction is 0, set δ = 0, and record the number of occurrences of δ = 0 as Count_zero;

Step 3.6: Compute the coordinates of the sub-pixel edge point P′. Specifically, from the distance difference δ between the sub-pixel edge point P′ and the pixel-level edge point P obtained in step 3.5, and the slope k_gradient of the gradient-direction line of the pixel-level edge point P, compute the coordinate differences δ_x and δ_y between P and P′ in the x and y directions by the formulas:

δ_x = δ / sqrt(k_gradient^2 + 1)

δ_y = k_gradient * δ / sqrt(k_gradient^2 + 1)

where the slope k_gradient of the gradient-direction line of the pixel-level edge point P can be obtained from the coordinates (x_p, y_p) of the edge pixel and the coarse center coordinates (x_oc, y_oc), namely:

k_gradient = (y_oc - y_p) / (x_oc - x_p)

Then, for the pixel point P(x_p, y_p), the corresponding sub-pixel edge point is P′(x_p + δ_x, y_p + δ_y);
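The decomposition of step 3.6 is a direct projection of the along-gradient offset δ onto the axes; a transcription of the two formulas above (function and variable names are ours):

```cpp
#include <cassert>
#include <cmath>

// Step 3.6: split the along-gradient offset delta into x and y components
// using the gradient-line slope k = (y_oc - y_p)/(x_oc - x_p), then shift
// the pixel-level edge point P to the sub-pixel point P'.
void subpixelPoint(double xp, double yp, double delta, double k,
                   double& xsub, double& ysub) {
    double denom = std::sqrt(k * k + 1.0);
    double dx = delta / denom;          // delta_x
    double dy = k * delta / denom;      // delta_y
    xsub = xp + dx;
    ysub = yp + dy;
}
```

Note that sqrt(k^2 + 1) normalizes the direction vector (1, k), so sqrt(δ_x^2 + δ_y^2) = |δ| as required.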

Step 3.7: For every edge pixel P(x_p, y_p), compute the corresponding sub-pixel edge point P′ by steps 3.2 to 3.6 in turn, where (x_p + δ_x, y_p + δ_y) are the coordinates of the sub-pixel edge point P′;

Step 4: Filter the sub-pixel edge points. The "isolated points" and noise appearing among the sub-pixel edge points are handled with curvature filtering and mean filtering respectively;

Step 4.1: Handle the "isolated points" appearing on the sub-pixel edge with curvature filtering, as follows:

(1) Compute the curvature of every sub-pixel edge point, as follows. Among the sequentially stored sub-pixel edge points, let (x′_p2, y′_p2) be the coordinates of a sub-pixel edge point P′, (x′_p1, y′_p1) the coordinates of the point immediately before P′, and (x′_p3, y′_p3) the coordinates of the point immediately after P′. The curvature of the sub-pixel edge point P′ is then

k = 1/r = 1 / sqrt((x_0 - x′_p2)^2 + (y_0 - y′_p2)^2)

where (x_0, y_0) is the center of the circle through P′, the point immediately before P′, and the point immediately after P′,

x_0 = (a - b + c) / d,  y_0 = (e - f + g) / (-d),

a = (x′_p1 + x′_p2)(x′_p2 - x′_p1)(y′_p3 - y′_p2)

b = (x′_p2 + x′_p3)(x′_p3 - x′_p2)(y′_p2 - y′_p1)

c = (y′_p1 - y′_p3)(y′_p2 - y′_p1)(y′_p3 - y′_p2)

d = 2[(x′_p2 - x′_p1)(y′_p3 - y′_p2) - (x′_p3 - x′_p2)(y′_p2 - y′_p1)]

e = (y′_p1 + y′_p2)(y′_p2 - y′_p1)(x′_p3 - x′_p2)

f = (y′_p2 + y′_p3)(y′_p3 - y′_p2)(x′_p2 - x′_p1)

g = (x′_p1 - x′_p3)(x′_p2 - x′_p1)(x′_p3 - x′_p2)
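The curvature formulas above can be transcribed directly; the guard against collinear triples (d ≈ 0, infinite radius) is our addition, and all names are ours.

```cpp
#include <cassert>
#include <cmath>

// Step 4.1(1): curvature of an edge point (x2, y2) from the circle through
// it and its immediate neighbors (x1, y1) and (x3, y3), using the patent's
// a..g and d expressions for the circumcenter (x0, y0).
double threePointCurvature(double x1, double y1,
                           double x2, double y2,
                           double x3, double y3) {
    double d = 2.0 * ((x2 - x1) * (y3 - y2) - (x3 - x2) * (y2 - y1));
    if (std::fabs(d) < 1e-12) return 0.0;   // collinear: infinite radius
    double a = (x1 + x2) * (x2 - x1) * (y3 - y2);
    double b = (x2 + x3) * (x3 - x2) * (y2 - y1);
    double c = (y1 - y3) * (y2 - y1) * (y3 - y2);
    double e = (y1 + y2) * (y2 - y1) * (x3 - x2);
    double f = (y2 + y3) * (y3 - y2) * (x2 - x1);
    double g = (x1 - x3) * (x2 - x1) * (x3 - x2);
    double x0 = (a - b + c) / d;
    double y0 = (e - f + g) / (-d);
    return 1.0 / std::hypot(x0 - x2, y0 - y2);
}
```

As a sanity check, three points on the unit circle give curvature 1, and collinear points give 0.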

(2) Filter all sub-pixel edge points according to their curvature, with the following criterion:

First sort the curvatures of the sub-pixel edge points in descending order, and take as the curvature threshold the 3·n-th element of this sequence (from high to low), where n is the number of isolated points, equal to the count Count_zero of occurrences of δ = 0 during the sub-pixel edge location of step 3.5. Then segment the curvatures of the sub-pixel edge points with this threshold: if the curvature of a sub-pixel edge point P is greater than the curvature threshold, and greater than the curvatures of the points immediately before and after it, the point P is considered an isolated point and is filtered out;

Step 4.2: Filter all sub-pixel edge points remaining after the removal of "isolated points" with a mean filter. Specifically, take the mean of the horizontal and vertical coordinates of the two points immediately before and after a sub-pixel edge point and of the point itself as the new horizontal and vertical coordinates of that sub-pixel edge point;
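Step 4.2 is a 3-point moving average over the ordered edge points. A minimal sketch, assuming the points form a closed ring (the patent stores them in clockwise order, so we wrap around at the ends; names are ours):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct P2 { double x, y; };

// Step 4.2: each sub-pixel edge point is replaced by the mean of itself and
// its two immediate neighbors along the (closed) ordered contour.
std::vector<P2> meanFilterRing(const std::vector<P2>& pts) {
    const int n = static_cast<int>(pts.size());
    std::vector<P2> out(n);
    for (int i = 0; i < n; ++i) {
        const P2& a = pts[(i + n - 1) % n];
        const P2& b = pts[i];
        const P2& c = pts[(i + 1) % n];
        out[i] = P2{(a.x + b.x + c.x) / 3.0, (a.y + b.y + c.y) / 3.0};
    }
    return out;
}
```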

Step 5: Fit a circle to the filtered sub-pixel edge points by least squares, finally obtaining the center and radius of the circular target.
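The patent only names "least squares" for step 5; one common linearization is the algebraic (Kasa-style) fit of x^2 + y^2 + Dx + Ey + F = 0, with center (-D/2, -E/2) and radius sqrt(D^2/4 + E^2/4 - F). A self-contained sketch under that assumption (3x3 normal equations solved by Cramer's rule; all names are ours):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt2 { double x, y; };

// Algebraic least-squares circle fit: minimize sum (x^2+y^2+Dx+Ey+F)^2.
void fitCircle(const std::vector<Pt2>& p,
               double& cx, double& cy, double& r) {
    double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
    double sxz = 0, syz = 0, sz = 0;
    const double n = static_cast<double>(p.size());
    for (const Pt2& q : p) {
        double z = q.x * q.x + q.y * q.y;
        sx += q.x;  sy += q.y;  sxx += q.x * q.x;  syy += q.y * q.y;
        sxy += q.x * q.y;  sxz += q.x * z;  syz += q.y * z;  sz += z;
    }
    // Normal equations for [D E F]:
    //   [sxx sxy sx][D]   [-sxz]
    //   [sxy syy sy][E] = [-syz]
    //   [sx  sy  n ][F]   [-sz ]
    double a11 = sxx, a12 = sxy, a13 = sx, a22 = syy, a23 = sy, a33 = n;
    double b1 = -sxz, b2 = -syz, b3 = -sz;
    double det = a11 * (a22 * a33 - a23 * a23)
               - a12 * (a12 * a33 - a23 * a13)
               + a13 * (a12 * a23 - a22 * a13);
    double D = (b1 * (a22 * a33 - a23 * a23) - a12 * (b2 * a33 - a23 * b3)
               + a13 * (b2 * a23 - a22 * b3)) / det;
    double E = (a11 * (b2 * a33 - a23 * b3) - b1 * (a12 * a33 - a23 * a13)
               + a13 * (a12 * b3 - b2 * a13)) / det;
    double F = (a11 * (a22 * b3 - b2 * a23) - a12 * (a12 * b3 - b2 * a13)
               + b1 * (a12 * a23 - a22 * a13)) / det;
    cx = -D / 2.0;
    cy = -E / 2.0;
    r = std::sqrt(cx * cx + cy * cy - F);
}
```

For points lying exactly on a circle the fit is exact; for noisy sub-pixel edge points it returns the algebraic least-squares center and radius.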

Compared with the prior art, the present invention has the following advantages:

(1) Compared with traditional two-dimensional image edge location methods, the sub-pixel edge location proposed by the present invention operates on the region of interest determined by the coarsely located center, and when fitting the edge gray differences it reduces Gaussian surface fitting to Gaussian curve fitting along the gradient direction, which simplifies the computation and speeds up the algorithm.

(2) When considering the gradient direction of each edge pixel, the proposed sub-pixel edge location method is not restricted to the few special directions determined by traditional edge-detection operators; instead, it uses the specific gradient direction of each edge pixel derived from the circle's geometry, and it obtains the gray values of the neighboring points along the gradient direction by linear gray-level interpolation, making the location of the edge points more accurate.

(3) For the "isolated points" and noise appearing among the sub-pixel edge points, the present invention proposes a simple and practical filtering method; both its principle and its algorithm are simple, and its filtering effect is good.

(4) Compared with existing methods, the proposed method for locating the center of a circular target has higher accuracy and better stability, and is highly practical.

附图说明 Description of drawings

图1是圆形目标的圆心定位方法具体步骤的流程图。Fig. 1 is a flow chart of specific steps of a method for locating the center of a circular target.

图2是感兴趣区域提取图。Figure 2 is a map of region of interest extraction.

图3是非极大值抑制示意图,其中图3(a)为根据梯度方向划分的四个扇区图;3(b)为3*3邻域图。Figure 3 is a schematic diagram of non-maximum value suppression, in which Figure 3(a) is a four-sector map divided according to the gradient direction; 3(b) is a 3*3 neighborhood map.

图4(a)是边缘像素点P在第1区域时的定义像素点沿梯度方向邻近点的原理图;图4(b)是边缘像素点P在第2区域时的定义像素点沿梯度方向邻近点的原理图。Figure 4(a) is a schematic diagram of how the neighboring points of an edge pixel P along the gradient direction are defined when P is in region 1; Figure 4(b) is the corresponding diagram when P is in region 2.

图5(a)是当边缘像素点P在第1区域时,边缘像素点P沿梯度方向的邻近点定义图;图5(b)是当边缘像素点P在第2区域时,边缘像素点P沿梯度方向的邻近点定义图。Figure 5(a) is the definition diagram of the neighboring points of edge pixel P along the gradient direction when P is in region 1; Figure 5(b) is the corresponding definition diagram when P is in region 2.

图6是亚像素边缘点求取原理图。Fig. 6 is a schematic diagram of sub-pixel edge point calculation.

具体实施方式 Detailed Description of the Embodiments

下面结合附图对本发明的具体实施方式作进一步描述。根据上述方法,在Windows操作系统中通过VC++6.0平台用C++编程实现了圆形目标的圆心定位操作。The specific embodiments of the present invention will be further described below in conjunction with the accompanying drawings. According to the above method, the center positioning operation of the circular target is realized by programming in C++ through the VC++6.0 platform in the Windows operating system.

应用本方法进行圆形目标的圆心定位主要包括圆心粗定位、像素级边缘定位、亚像素边缘定位、亚像素边缘点滤波、最小二乘拟合求得圆心五个操作步骤,具体步骤的流程图如图1所示,应用本方法进行圆形目标的圆心定位的具体步骤如下(步骤中未标明的单位都以像素为单位):Applying this method to locate the center of a circular target involves five steps: coarse center location, pixel-level edge location, sub-pixel edge location, sub-pixel edge-point filtering, and least-squares fitting for the final center. The flow chart of these steps is shown in Figure 1; the specific steps are as follows (units not otherwise marked are in pixels):

第1步:对图像中的圆形目标的圆心和半径进行粗定位,得到圆形目标的粗定位圆心(xoc,yoc)和粗定位圆半径Rc,其中,(xoc,yoc)为图像坐标系下圆心的坐标,具体步骤如下:Step 1: Perform rough positioning on the center and radius of the circular target in the image, and obtain the rough positioning circle center (x oc , y oc ) and rough positioning circle radius R c of the circular target, where (x oc , y oc ) is the coordinates of the center of the circle in the image coordinate system, and the specific steps are as follows:

第1.1步:对图像进行除噪、阈值分割,得到图像中各像素点灰度值为255或0的二值图像。其中,阈值分割采用迭代阈值法。迭代阈值法能自动计算出比较合适的分割阈值,其计算步骤如下:Step 1.1: Perform denoising and threshold segmentation on the image to obtain a binary image with a gray value of 255 or 0 for each pixel in the image. Among them, the threshold segmentation adopts the iterative threshold method. The iterative threshold method can automatically calculate a more appropriate segmentation threshold, and the calculation steps are as follows:

(1)选择阈值T,通常选择图像的平均灰度值作为初始阈值;(1) Select the threshold T, usually choose the average gray value of the image as the initial threshold;

(2)通过初始阈值T,将图像的像素按灰度值分为两组R1和R2;(2) Using the initial threshold T, divide the pixels of the image into two groups R1 and R2 according to their gray values;

(3)计算两组平均灰度值μ1和μ2(3) Calculate the average gray value μ 1 and μ 2 of the two groups;

(4)重新选择阈值T,新的T定义为:T=(μ12)/2;(4) Reselect the threshold T, and the new T is defined as: T=(μ 12 )/2;

循环(2)到(4),直到两组的平均灰度值μ1和μ2不再发生改变,因此得到所需要的阈值,再根据所得到的阈值对图像进行二值化。Cycle (2) to (4) until the average gray value μ 1 and μ 2 of the two groups no longer change, so the required threshold is obtained, and then the image is binarized according to the obtained threshold.
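Steps (1)-(4) above can be sketched as follows (an illustrative Python version; the patent's implementation is C++, and the function name and the 0.5-gray-level convergence test are our assumptions):

```python
import numpy as np

def iterative_threshold(gray):
    """Iterative threshold selection (Step 1.1).

    Starts from the mean gray value, splits the pixels into two groups
    at the current threshold, and updates T = (mu1 + mu2) / 2 until T
    stops changing.
    """
    gray = np.asarray(gray, dtype=float)
    t = gray.mean()                      # (1) initial threshold
    while True:
        low, high = gray[gray <= t], gray[gray > t]   # (2) two groups
        mu1 = low.mean() if low.size else 0.0          # (3) group means
        mu2 = high.mean() if high.size else 0.0
        t_new = (mu1 + mu2) / 2.0                      # (4) new threshold
        if abs(t_new - t) < 0.5:        # converged (assumed tolerance)
            return t_new
        t = t_new
```

The returned threshold is then used to binarize the image to gray values 255 and 0.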

第1.2步:对二值图像中的圆形目标进行边界提取及边界跟踪。在二值图像中,灰度值为255的像素即为圆形目标,对圆形目标进行边界提取的方法如下:如果原图中有一点为白,而相邻的8个像素全部为黑,则将该像素点的灰度值设为0。经过边界提取后,对该图中的目标轮廓线进行跟踪并存储,可得到圆形目标的边界点及其坐标。Step 1.2: Perform boundary extraction and boundary tracking on the circular target in the binary image. In a binary image, a pixel with a grayscale value of 255 is a circular target. The method of boundary extraction for a circular target is as follows: If one point in the original image is white, and the adjacent 8 pixels are all black, Then the gray value of the pixel is set to 0. After the boundary extraction, the target contour line in the figure is tracked and stored, and the boundary points and coordinates of the circular target can be obtained.

第1.3步:对跟踪的边界点用周线形心法拟合圆,得到该圆形目标的粗定位圆心和粗定位圆半径,并存储粗定位圆心坐标(xoc,yoc)和粗定位圆半径Rc。周线形心法是一种最为简单的亚像素定位算法,算法如下:设跟踪的边界点坐标为(i,j),跟踪边界点总数为N,根据周线形心法,圆形目标的圆心坐标为:Step 1.3: Fit a circle to the tracked boundary points with the contour-centroid method to obtain the coarse center and coarse radius of the circular target, and store the coarse center coordinates (xoc, yoc) and the coarse radius Rc. The contour-centroid method is one of the simplest sub-pixel location algorithms. Let the coordinates of the tracked boundary points be (i, j) and their total number be N; by the contour-centroid method, the center coordinates of the circular target are:

xoc = (Σi)/N    yoc = (Σj)/N

圆形目标的半径为各边界点到圆心距离的平均值,即:The radius of the circular target is the average distance from each boundary point to the center of the circle, namely:

Rc = (Σ√((i − xoc)² + (j − yoc)²))/N
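Step 1.3 can be sketched as follows (a Python illustration; the patent's implementation is C++, and the function name is ours):

```python
import math

def contour_centroid(boundary):
    """Coarse circle location by the contour-centroid method (Step 1.3).

    Center = mean of the tracked boundary coordinates; radius = mean
    distance from the boundary points to that center.
    """
    n = len(boundary)
    xc = sum(i for i, _ in boundary) / n
    yc = sum(j for _, j in boundary) / n
    rc = sum(math.hypot(i - xc, j - yc) for i, j in boundary) / n
    return (xc, yc), rc
```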

第2步:对图像中的圆形目标进行像素级边缘定位,具体步骤如下:Step 2: Perform pixel-level edge positioning on the circular target in the image, the specific steps are as follows:

第2.1步:根据粗定位圆心和粗定位圆半径,从图像中提取一个正方形区域,称为“感兴趣区域”,该正方形区域的提取方法为:以图像中圆形目标的粗定位圆心作为正方形区域的中心点,以圆形目标的粗定位直径加上2~6个像素点作为正方形区域的边长。如图2所示,图中圆代表图像中的圆形目标,正方形区域即为感兴趣区域,其中,O为粗定位圆心,Rc为粗定位半径,d一般取2~6个像素。后续步骤只针对感兴趣区域进行处理。Step 2.1: According to the center of the rough positioning circle and the radius of the rough positioning circle, extract a square area from the image, called the "region of interest". For the center point of the area, the rough positioning diameter of the circular target plus 2 to 6 pixels is used as the side length of the square area. As shown in Figure 2, the circle in the figure represents the circular target in the image, and the square area is the region of interest, where O is the center of the rough positioning circle, R c is the radius of the rough positioning, and d generally takes 2 to 6 pixels. Subsequent steps only target regions of interest.
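The region-of-interest extraction of Step 2.1 can be sketched as follows (an illustrative Python helper; the name, the default margin d=4, and the rounding to integer pixel bounds are our assumptions):

```python
def roi_bounds(xc, yc, rc, d=4):
    """Square region of interest around the coarse circle (Step 2.1).

    Side length = coarse diameter 2*rc plus a margin d of 2-6 pixels,
    centered on the coarse center (xc, yc); returns integer pixel
    bounds (x0, y0, x1, y1).
    """
    half = rc + d / 2.0
    x0, x1 = int(round(xc - half)), int(round(xc + half))
    y0, y1 = int(round(yc - half)), int(round(yc + half))
    return x0, y0, x1, y1
```

In practice the bounds would additionally be clamped to the image size before cropping.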

第2.2步:用canny算子对图像中所提取的感兴趣区域进行像素级边缘检测,得到圆形目标的像素级边缘,再对canny算子提取的圆形目标的像素级边缘点进行边界跟踪,得到每个像素级边缘点的坐标,且按顺时针顺序存储每个像素级边缘点的坐标。其中,Canny边缘检测对信噪比与定位乘积进行测度,属于先平滑后再求导的方法,其主要步骤和基本原理如下:Step 2.2: Use the Canny operator to perform pixel-level edge detection on the extracted region of interest to obtain the pixel-level edge of the circular target, then track the boundary of the detected pixel-level edge points, obtaining the coordinates of each pixel-level edge point and storing them in clockwise order. Canny edge detection optimizes a measure combining signal-to-noise ratio and localization; it belongs to the class of methods that smooth first and then differentiate. Its main steps and basic principles are as follows:

(1)用高斯滤波器平滑图像,高斯平滑函数如下:(1) Smooth the image with a Gaussian filter, and the Gaussian smoothing function is as follows:

H(x, y) = exp(−(x² + y²)/(2σ²))

其中,σ为高斯函数的标准偏差,x,y分别是图像中像素点的横、纵坐标。对图像进行高斯滤波,即对图像fI(x,y)用高斯函数H(x,y)进行卷积,得到滤波后的平滑图像G(x,y):Among them, σ is the standard deviation of the Gaussian function, and x and y are the horizontal and vertical coordinates of the pixels in the image, respectively. Gaussian filtering is performed on the image, that is, the image f I (x, y) is convolved with the Gaussian function H(x, y) to obtain a filtered smooth image G(x, y):

G(x,y)=fI(x,y)*H(x,y)G(x, y) = f I (x, y)*H(x, y)

(2)用一阶偏导数的有限差分来计算梯度的幅值和方向。一阶差分卷积模板如下:(2) Use the finite difference of the first-order partial derivatives to calculate the magnitude and direction of the gradient. The first-order differential convolution template is as follows:

H1 = | −1  −1 |        H2 = | 1  −1 |
     |  1   1 |             | 1  −1 |

将平滑后的图像G(x,y)分别用以上两个模板进行卷积,得到垂直和水平方向上的梯度图像Ey(x,y)和Ex(x,y),进而可得到梯度的幅值M(x,y)和方向θ(x,y),推算式如下:The smoothed image G(x, y) is convolved with the two templates above to obtain the gradient images Ey(x, y) and Ex(x, y) in the vertical and horizontal directions, from which the gradient magnitude M(x, y) and direction θ(x, y) are computed as:

M(x, y) = √(Ex(x, y)² + Ey(x, y)²)

θ(x, y) = arctan(Ey(x, y)/Ex(x, y))
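The gradient step can be sketched as follows (a Python illustration; the patent's implementation is C++, the function name is ours, and the templates are applied by direct correlation, so the sign conventions follow the templates as printed):

```python
import numpy as np

def gradient_2x2(img):
    """First-difference gradient on 2x2 neighborhoods (Step 2.2 (2)).

    Correlates the smoothed image with H1 = [[-1,-1],[1,1]] and
    H2 = [[1,-1],[1,-1]], then returns the gradient magnitude and
    direction (radians) on the (H-1, W-1) grid of 2x2 blocks.
    """
    g = np.asarray(img, dtype=float)
    ey = -g[:-1, :-1] - g[:-1, 1:] + g[1:, :-1] + g[1:, 1:]   # H1
    ex = g[:-1, :-1] - g[:-1, 1:] + g[1:, :-1] - g[1:, 1:]    # H2
    mag = np.hypot(ex, ey)
    ang = np.arctan2(ey, ex)
    return mag, ang
```

On a vertical ramp (rows 0, 1, 2) the magnitude is 2 everywhere and the direction is π/2, as expected for a purely vertical gradient.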

(3)对梯度幅值进行非极大值抑制(non-maxima suppression,NMS):图3是非极大值抑制示意图。根据梯度的方向,四个扇区的标号为0到3,如图3(a)所示,对应3*3邻域的四种可能组合,3*3邻域如图3(b)所示。在每一个像素点上,将邻域的中心像素Pcp与沿着梯度线的两个像素相比,如果Pcp的梯度值不比沿梯度线的两个相邻像素梯度值大,则该点不为边缘点。(3) Perform non-maxima suppression (NMS) on the gradient magnitude: Figure 3 is a schematic diagram of non-maxima suppression. According to the gradient direction, the four sectors are labeled 0 to 3, as shown in Figure 3(a), corresponding to the four possible combinations in a 3*3 neighborhood, shown in Figure 3(b). At each pixel, the central pixel Pcp of the neighborhood is compared with the two pixels along the gradient line; if the gradient value of Pcp is not greater than those of its two neighbors along the gradient line, the point is not an edge point.

(4)用双阈值算法检测和连接边缘。双阈值算法对非极大值抑制图像作用两个阈值τ1和τ2,且2τ1≈τ2,从而可以得到两个阈值边缘图像N1[i,j]和N2[i,j]。由于N2[i,j]使用高阈值得到,因而含有很少的假边缘,但有间断。双阈值法要在N2[i,j]中把边缘连接成轮廓,当到达轮廓的端点时,该算法就在N1[i,j]的8邻域点中寻找可以连接到轮廓上的边缘,这样,算法不断地在N1[i,j]中收集边缘,直到将N2[i,j]连接起来为止。(4) Detect and connect edges with a double-threshold algorithm. The double-threshold algorithm applies two thresholds τ1 and τ2 to the non-maximum suppressed image, and 2τ1≈τ2, so that two threshold edge images N 1 [i, j] and N 2 [i, j] can be obtained. Since N 2 [i, j] is obtained using a high threshold, it contains few false edges, but there are discontinuities. The double-threshold method needs to connect the edges into a contour in N 2 [i, j]. When the end point of the contour is reached, the algorithm searches for edges that can be connected to the contour in the 8 neighborhood points of N 1 [i, j]. , so that the algorithm keeps collecting edges in N 1 [i, j] until N 2 [i, j] is connected.

第3步:对图像中的圆形目标进行亚像素边缘定位,具体步骤如下:Step 3: Perform sub-pixel edge positioning on the circular target in the image, the specific steps are as follows:

第3.1步:将圆形目标的边缘像素点进行区域划分,划分方法如下:称以圆形目标的粗定位圆心为原点、以水平向右为正方向、以像素为单位长度的直线为x轴;称以圆形目标的粗定位圆心为原点、以垂直向上为正方向、以像素为单位长度的直线为y轴;以粗定位圆心为旋转点,由x轴正方向逆时针旋转的角度为θ,根据θ的大小将圆形目标的边缘像素点划分为两个部分,一部分是θ∈[0°,45°]∪[135°,225°]∪[315°,360°]所对应的边缘像素点,称为第1区域的边缘像素点,另一部分是θ∈(45°,135°)∪(225°,315°)对应的边缘像素点,称为第2区域的边缘像素点,粗定位圆心与边缘像素点的连线为该边缘像素点的梯度方向。Step 3.1: Divide the edge pixels of the circular target into regions, and the division method is as follows: the x-axis is the straight line with the rough positioning center of the circular target as the origin, the positive direction horizontally to the right, and the unit length in pixels ; It is called the y-axis with the coarse positioning center of the circular target as the origin, the vertical upward as the positive direction, and the length in units of pixels as the y-axis; the coarse positioning center as the rotation point, and the angle of counterclockwise rotation from the positive direction of the x-axis is θ, according to the size of θ, the edge pixels of the circular target are divided into two parts, one part is θ∈[0°, 45°]∪[135°, 225°]∪[315°, 360°] corresponding to The edge pixels are called the edge pixels of the first area, and the other part is the edge pixels corresponding to θ∈(45°, 135°)∪(225°, 315°), which are called the edge pixels of the second area. The line connecting the center of the coarse positioning circle and the edge pixel point is the gradient direction of the edge pixel point.
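The region division of Step 3.1 can be sketched as follows (an illustrative Python helper; the name is ours, and the y axis here points up as in the patent's coordinate definition, so image row coordinates would first need their sign flipped):

```python
import math

def edge_region(px, py, xc, yc):
    """Classify an edge pixel into region 1 or 2 (Step 3.1).

    theta is the counterclockwise angle from the positive x axis of the
    ray from the coarse center (xc, yc) to the pixel (px, py).
    Region 1: [0,45] + [135,225] + [315,360] degrees (gradient mostly
    along x); region 2: the rest (gradient mostly along y).
    """
    theta = math.degrees(math.atan2(py - yc, px - xc)) % 360.0
    if theta <= 45 or 135 <= theta <= 225 or theta >= 315:
        return 1
    return 2
```

Equivalently, region 1 holds exactly the edge pixels whose gradient line has |slope| ≤ 1.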

第3.2步:求取边缘像素点沿梯度方向的邻近点,具体原理如下:若边缘像素点P(xp,yp)在第1区域内,图4(a)是定义像素点沿梯度方向邻近点的原理图,(xp,yp)为以像素为单位的边缘点P的坐标值,LG为P点的梯度方向直线,直线y1、y2、y3、y4为P点附近以像素点为单位的四条横线,分别与直线LG相交于点B′1、A′1、C′1、D′1;直线x1、x2、x3、x4为P点附近以像素点为单位的四条纵线,分别与直线LG相交于点D1、C1、A1、B1。选取{B′1、A′1、C′1、D′1}和{B1、A1、C1、D1}两组点作为边缘点沿梯度方向的候选邻近点。由于P点在第1区域,梯度方向以x方向为主,即LG的斜率的绝对值小于1,因此候选邻近点{B′1、A′1、C′1、D′1}到P点的距离明显大于候选邻近点{B1、A1、C1、D1}到P点的距离;若选用{B′1、A′1、C′1、D′1}作为P点沿梯度方向的邻近点,会导致边缘定位不够精确,因此选用{B1、A1、C1、D1}作为其邻近点。若边缘像素点P(xp,yp)在第2区域内,图4(b)是定义像素点沿梯度方向邻近点的原理图,直线y1、y2、y3、y4分别与直线LG相交于点B2、A2、C2、D2,直线x1、x2、x3、x4分别与直线LG相交于点D′2、C′2、A′2、B′2,选取{B′2、A′2、C′2、D′2}和{B2、A2、C2、D2}两组点作为边缘点沿梯度方向的候选邻近点。由于P点在第2区域,梯度方向以y方向为主,即LG的斜率的绝对值大于1,因此候选邻近点{B′2、A′2、C′2、D′2}到P点的距离明显大于候选邻近点{B2、A2、C2、D2}到P点的距离,故同理选用{B2、A2、C2、D2}作为其邻近点。

因此,针对第3.1步划分的两个不同区域的像素点,其沿梯度方向的邻近点定义如下:(xp,yp)为以像素为单位的边缘点P的坐标值。若边缘像素点P(xp,yp)在第1区域内,图5(a)是像素点沿梯度方向的邻近点定义图,LG为边缘像素点P(xp,yp)的梯度方向直线,L1代表直线x=xp-3,L2代表直线x=xp-2,L3代表直线x=xp-1,L4代表直线x=xp+1,L5代表直线x=xp+2,L6代表直线x=xp+3。取梯度方向直线LG与直线x=xp+1的交点A1(xa1,ya1)为该边缘像素点沿梯度方向的右边第一邻近点;与直线x=xp+2的交点B1(xb1,yb1)为右边第二邻近点;与直线x=xp+3的交点F1(xf1,yf1)为右边第三邻近点;与直线x=xp-1的交点C1(xc1,yc1)为左边第一邻近点;与直线x=xp-2的交点D1(xd1,yd1)为左边第二邻近点;与直线x=xp-3的交点E1(xe1,ye1)为左边第三邻近点。若边缘像素点P(xp,yp)在第2区域内,图5(b)是像素点沿梯度方向的邻近点定义图,LG为边缘像素点P(xp,yp)的梯度方向直线,取LG与直线y=yp+1的交点A2(xa2,ya2)为该边缘像素点沿梯度方向的上边第一邻近点;与直线y=yp+2的交点B2(xb2,yb2)为上边第二邻近点;与直线y=yp+3的交点F2(xf2,yf2)为上边第三邻近点;与直线y=yp-1的交点C2(xc2,yc2)为下边第一邻近点;与直线y=yp-2的交点D2(xd2,yd2)为下边第二邻近点;与直线y=yp-3的交点E2(xe2,ye2)为下边第三邻近点。

Step 3.2: Find the neighboring points of each edge pixel along its gradient direction. The principle is as follows. If the edge pixel P(xp, yp) lies in region 1, Figure 4(a) shows how its neighboring points along the gradient direction are defined: (xp, yp) is the coordinate of edge point P in pixels, LG is the gradient-direction line through P, the four horizontal lines y1-y4 at one-pixel spacing near P intersect LG at B′1, A′1, C′1 and D′1, and the four vertical lines x1-x4 intersect LG at D1, C1, A1 and B1. The two groups {B′1, A′1, C′1, D′1} and {B1, A1, C1, D1} are the candidate neighboring points of the edge point along the gradient direction. Since P is in region 1, the gradient direction is dominated by the x direction, i.e. the absolute value of the slope of LG is less than 1, so the candidates {B′1, A′1, C′1, D′1} are clearly farther from P than {B1, A1, C1, D1}; choosing the former would make edge localization less accurate, so {B1, A1, C1, D1} are taken as the neighboring points. If P(xp, yp) lies in region 2, Figure 4(b) shows the corresponding construction: the horizontal lines y1-y4 intersect LG at B2, A2, C2 and D2, and the vertical lines x1-x4 intersect LG at D′2, C′2, A′2 and B′2. Since the gradient direction is now dominated by the y direction, i.e. the absolute value of the slope of LG is greater than 1, the candidates {B′2, A′2, C′2, D′2} are clearly farther from P than {B2, A2, C2, D2}, and by the same argument {B2, A2, C2, D2} are taken as the neighboring points.

Accordingly, for the pixels of the two regions divided in step 3.1, the neighboring points along the gradient direction are defined as follows, with (xp, yp) the coordinate of edge point P in pixels. If P(xp, yp) is in region 1, Figure 5(a) defines the neighboring points: LG is the gradient-direction line of P, and L1 through L6 denote the lines x=xp-3, x=xp-2, x=xp-1, x=xp+1, x=xp+2 and x=xp+3. The intersection of LG with x=xp+1, A1(xa1, ya1), is the first neighboring point to the right along the gradient direction; with x=xp+2, B1(xb1, yb1), the second to the right; with x=xp+3, F1(xf1, yf1), the third to the right; with x=xp-1, C1(xc1, yc1), the first to the left; with x=xp-2, D1(xd1, yd1), the second to the left; and with x=xp-3, E1(xe1, ye1), the third to the left. If P(xp, yp) is in region 2, Figure 5(b) defines the neighboring points: the intersection of LG with y=yp+1, A2(xa2, ya2), is the first neighboring point above along the gradient direction; with y=yp+2, B2(xb2, yb2), the second above; with y=yp+3, F2(xf2, yf2), the third above; with y=yp-1, C2(xc2, yc2), the first below; with y=yp-2, D2(xd2, yd2), the second below; and with y=yp-3, E2(xe2, ye2), the third below.

第3.3步:采用线性灰度插值的方法求取以像素为单位的边缘像素点P(xp,yp)沿梯度方向的邻近点的灰度值,具体方法为:首先找出与该像素点的沿梯度方向的邻近点距离最近的两个整像素点,然后对这两个整像素点进行线性灰度插值,插值结果为该边缘像素点的沿梯度方向的邻近点的灰度值,其中,(xp,yp)为以像素为单位的边缘像素点P的坐标值,使用f(x,y)表示坐标为(x,y)的像素点的灰度值,使用符号[]表示取整数部分,所述的用于获取邻近点灰度值的线性灰度插值方法如下:Step 3.3: Use the method of linear grayscale interpolation to obtain the grayscale value of the adjacent point of the edge pixel point P(x p , y p ) along the gradient direction in units of pixels. The specific method is: first find out the The adjacent points along the gradient direction of the point are the nearest two integer pixels, and then linear grayscale interpolation is performed on these two integer pixels, and the interpolation result is the grayscale value of the adjacent points along the gradient direction of the edge pixel point, Among them, (x p , y p ) is the coordinate value of the edge pixel point P in units of pixels, f(x, y) is used to represent the gray value of the pixel point whose coordinates are (x, y), and the symbol [] Indicates the integer part, the linear gray interpolation method used to obtain the gray value of adjacent points is as follows:

(1)若边缘像素点P(xp,yp)在第1区域内,(1) If the edge pixel point P(x p , y p ) is in the first area,

邻近点A1(xa1,ya1)的灰度值Gray value of adjacent point A 1 (x a1 , y a1 )

f(xa1,ya1)=(1-λ)*f(xa1,[ya1])+λ*f(xa1,[ya1]+1)f(x a1 , y a1 )=(1-λ)*f(x a1 , [y a1 ])+λ*f(x a1 , [y a1 ]+1)

λ=ya1-[ya1],λ=y a1 -[y a1 ],

邻近点B1(xb1,yb1)的灰度值Gray value of adjacent point B 1 (x b1 , y b1 )

f(xb1,yb1)=(1-λ)*f(xb1,[yb1])+λ*f(xb1,[yb1]+1)f(x b1 , y b1 )=(1-λ)*f(x b1 ,[y b1 ])+λ*f(x b1 ,[y b1 ]+1)

λ=yb1-[yb1]λ=y b1 -[y b1 ]

邻近点F1(xf1,yf1)的灰度值Gray value of adjacent point F 1 (x f1 , y f1 )

f(xf1,yf1)=(1-λ)*f(xf1,[yf1])+λ*f(xf1,[yf1]+1)f(x f1 , y f1 )=(1-λ)*f(x f1 , [y f1 ])+λ*f(x f1 , [y f1 ]+1)

λ=yf1-[yf1]λ=y f1 -[y f1 ]

邻近点C1(xc1,yc1)的灰度值Gray value of adjacent point C 1 (x c1 , y c1 )

f(xc1,yc1)=(1-λ)*f(xc1,[yc1])+λ*f(xc1,[yc1]+1)f(x c1 , y c1 )=(1-λ)*f(x c1 ,[y c1 ])+λ*f(x c1 ,[y c1 ]+1)

λ=yc1-[yc1]λ=y c1 -[y c1 ]

邻近点D1(xd1,yd1)的灰度值Gray value of adjacent point D1(xd1, yd1)

f(xd1,yd1)=(1-λ)*f(xd1,[yd1])+λ*f(xd1,[yd1]+1)f(x d1 , y d1 )=(1-λ)*f(x d1 , [y d1 ])+λ*f(x d1 , [y d1 ]+1)

λ=yd1-[yd1]λ=y d1 -[y d1 ]

邻近点E1(xe1,ye1)的灰度值Gray value of adjacent point E 1 (x e1 , y e1 )

f(xe1,ye1)=(1-λ)*f(xe1,[ye1])+λ*f(xe1,[ye1]+1)f(x e1 , y e1 )=(1-λ)*f(x e1 ,[y e1 ])+λ*f(x e1 ,[y e1 ]+1)

λ=ye1-[ye1]λ=y e1 -[y e1 ]

(2)若边缘像素点P(xp,yp)在第2区域内,(2) If the edge pixel point P(x p , y p ) is in the second area,

邻近点A2(xa2,ya2)的灰度值Gray value of adjacent point A 2 (x a2 , y a2 )

f(xa2,ya2)=(1-λ)*f([xa2],ya2)+λ*f([xa2]+1,ya2)f(x a2 , y a2 )=(1-λ)*f([x a2 ], y a2 )+λ*f([x a2 ]+1, y a2 )

λ=xa2-[xa2]λ=x a2 -[x a2 ]

邻近点B2(xb2,yb2)的灰度值Gray value of adjacent point B 2 (x b2 , y b2 )

f(xb2,yb2)=(1-λ)*f([xb2],yb2)+λ*f([xb2]+1,yb2)f(x b2 , y b2 )=(1-λ)*f([x b2 ], y b2 )+λ*f([x b2 ]+1, y b2 )

λ=xb2-[xb2]λ=x b2 -[x b2 ]

邻近点F2(xf2,yf2)的灰度值Gray value of adjacent point F 2 (x f2 , y f2 )

f(xf2,yf2)=(1-λ)*f([xf2],yf2)+λ*f([xf2]+1,yf2)f(x f2 , y f2 )=(1-λ)*f([x f2 ], y f2 )+λ*f([x f2 ]+1, y f2 )

λ=xf2-[xf2]λ=x f2 -[x f2 ]

邻近点C2(xc2,yc2)的灰度值Gray value of adjacent point C 2 (x c2 , y c2 )

f(xc2,yc2)=(1-λ)*f([xc2],yc2)+λ*f([xc2]+1,yc2)f(x c2 , y c2 )=(1-λ)*f([x c2 ], y c2 )+λ*f([x c2 ]+1, y c2 )

λ=xc2-[xc2]λ=x c2 -[x c2 ]

邻近点D2(xd2,yd2)的灰度值Gray value of adjacent point D 2 (x d2 , y d2 )

f(xd2,yd2)=(1-λ)*f([xd2],yd2)+λ*f([xd2]+1,yd2)f(x d2 , y d2 )=(1-λ)*f([x d2 ], y d2 )+λ*f([x d2 ]+1, y d2 )

λ=xd2-[xd2]λ=x d2 -[x d2 ]

邻近点E2(xe2,ye2)的灰度值Gray value of adjacent point E 2 (x e2 , y e2 )

f(xe2,ye2)=(1-λ)*f([xe2],ye2)+λ*f([xe2]+1,ye2)

λ=xe2-[xe2]λ=x e2 -[x e2 ]
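The interpolation formulas above all share one pattern; for region 1 it can be sketched as follows (an illustrative Python helper; the name is ours, `f` stands for any gray-value lookup, and floor is used for the patent's integer-part operator [.], which coincides with it for non-negative coordinates):

```python
import math

def interp_gray_region1(f, x, y):
    """Linear gray-level interpolation for a region-1 neighbor (Step 3.3).

    In region 1 a neighbor lies on a vertical pixel line, so x is an
    integer and the gray value is interpolated between the two integer
    pixels bracketing y:
        f(x, y) = (1 - lam) * f(x, [y]) + lam * f(x, [y] + 1),
        lam = y - [y].
    """
    y0 = math.floor(y)
    lam = y - y0
    return (1.0 - lam) * f(x, y0) + lam * f(x, y0 + 1)
```

The region-2 case is identical with the roles of x and y exchanged.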

第3.4步:根据第3.3步所求边缘点沿梯度方向的邻近点的灰度值以及边缘点P(xp,yp)本身的灰度值,选取灰度值的前向差分和后向差分的平均值作为边缘点及其沿梯度方向的邻近点的对应的灰度差,其中,(xp,yp)为以像素为单位的边缘像素点P的坐标值,使用f(x,y)表示坐标为(x,y)的像素点的灰度值,所述的求取灰度差的方法如下:Step 3.4: From the gray values of the neighbors along the gradient direction obtained in step 3.3 and the gray value of the edge point P(xp, yp) itself, take the average of the forward and backward differences of the gray values as the gray-level difference assigned to the edge point and to each of its neighbors along the gradient direction, where (xp, yp) is the coordinate of edge pixel P in pixels and f(x, y) denotes the gray value of the pixel at (x, y). The gray-level differences are computed as follows:

(1)若边缘像素点P(xp,yp)在第1区域内,(1) If the edge pixel point P(x p , y p ) is in the first area,

邻近点A1(xa1,ya1)对应的灰度差The gray level difference corresponding to the adjacent point A 1 (x a1 , y a1 )

fa1 = |f(xp, yp) − f(xa1, ya1)|/2 + |f(xa1, ya1) − f(xb1, yb1)|/2

邻近点B1(xb1,yb1)对应的灰度差The gray level difference corresponding to the adjacent point B 1 (x b1 , y b1 )

fb1 = |f(xa1, ya1) − f(xb1, yb1)|/2 + |f(xb1, yb1) − f(xf1, yf1)|/2

边缘点P(xp,yp)对应的灰度差The gray level difference corresponding to the edge point P(x p , y p )

fp1 = |f(xc1, yc1) − f(xp, yp)|/2 + |f(xp, yp) − f(xa1, ya1)|/2

邻近点C1(xc1,yc1)对应的灰度差The gray level difference corresponding to the adjacent point C 1 (x c1 , y c1 )

fc1 = |f(xd1, yd1) − f(xc1, yc1)|/2 + |f(xc1, yc1) − f(xp, yp)|/2

邻近点D1(xd1,yd1)对应的灰度差The gray level difference corresponding to the adjacent point D1(xd1, yd1)

fd1 = |f(xe1, ye1) − f(xd1, yd1)|/2 + |f(xd1, yd1) − f(xc1, yc1)|/2

(2)若边缘像素点P(xp,yp)在第2区域内,(2) If the edge pixel point P(x p , y p ) is in the second area,

邻近点A2(xa2,ya2)对应的灰度差The gray level difference corresponding to the adjacent point A 2 (x a2 , y a2 )

fa2 = |f(xp, yp) − f(xa2, ya2)|/2 + |f(xa2, ya2) − f(xb2, yb2)|/2

邻近点B2(xb2,yb2)对应的灰度差The gray level difference corresponding to the adjacent point B 2 (x b2 , y b2 )

fb2 = |f(xa2, ya2) − f(xb2, yb2)|/2 + |f(xb2, yb2) − f(xf2, yf2)|/2

边缘点P(xp,yp)对应的灰度差The gray level difference corresponding to the edge point P(x p , y p )

fp2 = |f(xc2, yc2) − f(xp, yp)|/2 + |f(xp, yp) − f(xa2, ya2)|/2

邻近点C2(xc2,yc2)对应的灰度差The gray level difference corresponding to the adjacent point C 2 (x c2 , y c2 )

fc2 = |f(xd2, yd2) − f(xc2, yc2)|/2 + |f(xc2, yc2) − f(xp, yp)|/2

邻近点D2(xd2,yd2)对应的灰度差The gray level difference corresponding to the adjacent point D 2 (x d2 , y d2 )

fd2 = |f(xe2, ye2) − f(xd2, yd2)|/2 + |f(xd2, yd2) − f(xc2, yc2)|/2
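Step 3.4 can be sketched for one edge pixel as follows (an illustrative Python helper; the name and argument order are ours, and the seven inputs are the interpolated gray values at E, D, C, the pixel P itself, A, B and F along the gradient direction):

```python
def gray_differences(gE, gD, gC, gP, gA, gB, gF):
    """Forward/backward averaged gray differences (Step 3.4).

    Each point's gray-level difference is the average of the absolute
    backward and forward differences along the gradient direction,
    e.g. f_p = (|gC - gP| + |gP - gA|) / 2.
    Returns the differences at D, C, P, A, B.
    """
    fd = (abs(gE - gD) + abs(gD - gC)) / 2.0
    fc = (abs(gD - gC) + abs(gC - gP)) / 2.0
    fp = (abs(gC - gP) + abs(gP - gA)) / 2.0
    fa = (abs(gP - gA) + abs(gA - gB)) / 2.0
    fb = (abs(gA - gB) + abs(gB - gF)) / 2.0
    return fd, fc, fp, fa, fb
```

For an ideal step edge centered on P the differences peak at P and fall off symmetrically, which is what the Gaussian fit of Step 3.5 exploits.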

第3.5步:求取亚像素边缘点与像素级边缘点的距离差,求取方法及说明如下:根据方形孔径原理,某一像素的灰度值输出可以表示为Step 3.5: Find the distance difference between the sub-pixel edge point and the pixel-level edge point. The method and its rationale are as follows. According to the square-aperture principle, the gray-value output of a pixel can be expressed as

f(i, j) = ∫[j−0.5, j+0.5] ∫[i−0.5, i+0.5] g(x, y) dx dy

式中,g(x,y)是连续图像的光强分布,f(i,j)是像素感光面上各部分光强综合作用的结果,采样结果是以灰度值为数值的离散矩阵。由于光学元器件的卷积作用以及光学衍射作用,在物空间剧变的灰度经光学成像成为渐变的形式,边缘在图像中表征为一种灰度分布,图像边缘灰度值变化应当是高斯分布,高斯曲线顶点对应的位置即为真实边缘点的位置,而对于圆形图像,像素级边缘点的梯度方向为圆心到该点的直线方向,仅需对像素级边缘点及其沿梯度方向的邻近点进行高斯曲线拟合即可求出亚像素边缘点的位置,这样可以将二维的高斯曲面拟合转化为一维的高斯曲线拟合。如图6所示,P(xp,yp)点为像素级边缘点,LG为边缘像素点P(xp,yp)的梯度方向直线,若P在第1区域内,B1、A1、C1、D1为P点沿梯度方向的邻近点,若P在第2区域内,B2、A2、C2、D2为P点沿梯度方向的邻近点,高斯曲线顶点M所对应的P′点应为其真实边缘点位置,P点与P′点的距离差为δ。一维高斯曲线的表达式为:In the formula, g(x, y) is the light intensity distribution of the continuous image, f(i, j) is the result of the comprehensive effect of the light intensity of each part on the photosensitive surface of the pixel, and the sampling result is a discrete matrix with gray values. Due to the convolution effect of optical components and the effect of optical diffraction, the gray level that changes drastically in the object space becomes a gradual form through optical imaging, and the edge is represented as a gray level distribution in the image, and the change of the gray level value at the edge of the image should be a Gaussian distribution , the position corresponding to the vertex of the Gaussian curve is the position of the real edge point, and for a circular image, the gradient direction of the pixel-level edge point is the straight line direction from the center of the circle to the point, and only the pixel-level edge point and its gradient along the gradient direction The position of the sub-pixel edge point can be obtained by Gaussian curve fitting of adjacent points, so that the two-dimensional Gaussian surface fitting can be transformed into one-dimensional Gaussian curve fitting. As shown in Figure 6, P(x p , y p ) is a pixel-level edge point, L G is the gradient direction straight line of edge pixel point P(x p , y p ), if P is in the first area, B 1 , A 1 , C 1 , D 1 are adjacent points of point P along the gradient direction. If P is in the second area, B 2 , A 2 , C 2 , D 2 are adjacent points of point P along the gradient direction. 
The point P' corresponding to the vertex M should be the position of the real edge point, and the distance difference between point P and point P' is δ. The expression of a one-dimensional Gaussian curve is:

ỹ = (1/(√(2π)·σG))·exp(−(δ − μ)²/(2σG²))        (1)

式中,μ为高斯函数的均值,σG为高斯函数的标准差。In the formula, μ is the mean value of the Gaussian function, and σ G is the standard deviation of the Gaussian function.

为方便计算,对上式两边取对数,并令ỹ* = ln ỹ,上式可转化为:For convenience of calculation, take the logarithm of both sides of the above formula and let ỹ* = ln ỹ; the formula becomes:

ỹ* = m11δ² + m12δ + m13        (2)

根据方形孔径采样定理,像素灰度差值为According to the square aperture sampling theorem, the pixel gray level difference is

ỹ*(n) = ∫[n−0.5, n+0.5] (m11δ² + m12δ + m13) dδ        (3)

令边缘点P的序号为0,若P在第1区域内,P点对应的灰度差为fp1,其邻近点D1,C1,A1,B1序号分别表示为-2、-1、1和2,对应的灰度差为fd1、fc1、fa1和fb1,根据式(3),有:Let the serial number of the edge point P be 0, if P is in the first area, the gray level difference corresponding to P point is f p1 , and the serial numbers of its adjacent points D 1 , C 1 , A 1 , B 1 are expressed as -2, - 1, 1 and 2, the corresponding gray level difference is f d1 , f c1 , f a1 and f b1 , according to formula (3), we have:

$$f_{d1} = \int_{-2.5}^{-1.5}\left(m_{11}\delta^2 + m_{12}\delta + m_{13}\right)\mathrm{d}\delta = \frac{49}{12}m_{11} - 2m_{12} + m_{13} \qquad (4)$$

$$f_{c1} = \int_{-1.5}^{-0.5}\left(m_{11}\delta^2 + m_{12}\delta + m_{13}\right)\mathrm{d}\delta = \frac{13}{12}m_{11} - m_{12} + m_{13} \qquad (5)$$

$$f_{p1} = \int_{-0.5}^{0.5}\left(m_{11}\delta^2 + m_{12}\delta + m_{13}\right)\mathrm{d}\delta = \frac{1}{12}m_{11} + m_{13} \qquad (6)$$

$$f_{a1} = \int_{0.5}^{1.5}\left(m_{11}\delta^2 + m_{12}\delta + m_{13}\right)\mathrm{d}\delta = \frac{13}{12}m_{11} + m_{12} + m_{13} \qquad (7)$$

$$f_{b1} = \int_{1.5}^{2.5}\left(m_{11}\delta^2 + m_{12}\delta + m_{13}\right)\mathrm{d}\delta = \frac{49}{12}m_{11} + 2m_{12} + m_{13} \qquad (8)$$

Combining Eqs. (4)–(8) into a system and solving it by least squares gives m_11, m_12 and m_13 as expressions in f_d1, f_c1, f_p1, f_a1 and f_b1; substituting these into the parabola-vertex coordinate δ = −m_12 / (2 m_11) yields:

$$\delta = -\frac{-0.2 f_{d1} - 0.1 f_{c1} + 0.1 f_{a1} + 0.2 f_{b1}}{2\left(0.1429 f_{d1} - 0.0714 f_{c1} - 0.1429 f_{p1} - 0.0714 f_{a1} + 0.1429 f_{b1}\right)}$$
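The decimal coefficients in this expression are simply the least-squares weights of the system (4)–(8). As an illustrative numerical check (this script is my own addition, not part of the patent), they can be recovered with NumPy's pseudo-inverse:

```python
import numpy as np

# Integrating the parabola m11*d^2 + m12*d + m13 over [n-0.5, n+0.5]
# for n in {-2, -1, 0, 1, 2} gives the design-matrix row
# (n^2 + 1/12, n, 1) -- exactly the right-hand sides of Eqs. (4)-(8).
n = np.arange(-2, 3)
A = np.column_stack([n**2 + 1.0 / 12.0, n, np.ones(5)])

# Rows of the pseudo-inverse express m11, m12, m13 as linear
# combinations of the integrated gray differences f_d, f_c, f_p, f_a, f_b.
m11_w, m12_w, m13_w = np.linalg.pinv(A)

assert np.allclose(m11_w, [1/7, -1/14, -1/7, -1/14, 1/7])
assert np.allclose(m12_w, [-0.2, -0.1, 0.0, 0.1, 0.2])
```

The rounded values 0.1429 and 0.0714 in the patent's formula are 1/7 and 1/14.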

Note that in going from Eq. (1) to Eq. (2), $\tilde{y}$ was replaced by its natural logarithm; the pixel gray-level differences must therefore also be log-transformed, so the distance between point P and point P′ is:

$$\delta = \frac{0.1\ln f_{d1} + 0.05\ln f_{c1} - 0.05\ln f_{a1} - 0.1\ln f_{b1}}{0.1429\ln f_{d1} - 0.0714\ln f_{c1} - 0.1429\ln f_{p1} - 0.0714\ln f_{a1} + 0.1429\ln f_{b1}}$$

If P is in region 2, denote the gray-level difference at P by f_p2, index its neighbours D_2, C_2, A_2, B_2 as −2, −1, 1 and 2, and denote the corresponding gray-level differences by f_d2, f_c2, f_a2 and f_b2. From Eq. (3):

$$f_{d2} = \int_{-2.5}^{-1.5}\left(m_{11}\delta^2 + m_{12}\delta + m_{13}\right)\mathrm{d}\delta = \frac{49}{12}m_{11} - 2m_{12} + m_{13} \qquad (9)$$

$$f_{c2} = \int_{-1.5}^{-0.5}\left(m_{11}\delta^2 + m_{12}\delta + m_{13}\right)\mathrm{d}\delta = \frac{13}{12}m_{11} - m_{12} + m_{13} \qquad (10)$$

$$f_{p2} = \int_{-0.5}^{0.5}\left(m_{11}\delta^2 + m_{12}\delta + m_{13}\right)\mathrm{d}\delta = \frac{1}{12}m_{11} + m_{13} \qquad (11)$$

$$f_{a2} = \int_{0.5}^{1.5}\left(m_{11}\delta^2 + m_{12}\delta + m_{13}\right)\mathrm{d}\delta = \frac{13}{12}m_{11} + m_{12} + m_{13} \qquad (12)$$

$$f_{b2} = \int_{1.5}^{2.5}\left(m_{11}\delta^2 + m_{12}\delta + m_{13}\right)\mathrm{d}\delta = \frac{49}{12}m_{11} + 2m_{12} + m_{13} \qquad (13)$$

Combining Eqs. (9)–(13) into a system and solving it by least squares gives m_11, m_12 and m_13 as expressions in f_d2, f_c2, f_p2, f_a2 and f_b2; substituting these into the parabola-vertex coordinate δ = −m_12 / (2 m_11) yields:

$$\delta = -\frac{-0.2 f_{d2} - 0.1 f_{c2} + 0.1 f_{a2} + 0.2 f_{b2}}{2\left(0.1429 f_{d2} - 0.0714 f_{c2} - 0.1429 f_{p2} - 0.0714 f_{a2} + 0.1429 f_{b2}\right)}$$

Note that in going from Eq. (1) to Eq. (2), $\tilde{y}$ was replaced by its natural logarithm; the pixel gray-level differences must therefore also be log-transformed, so the distance between point P and point P′ is:

$$\delta = \frac{0.1\ln f_{d2} + 0.05\ln f_{c2} - 0.05\ln f_{a2} - 0.1\ln f_{b2}}{0.1429\ln f_{d2} - 0.0714\ln f_{c2} - 0.1429\ln f_{p2} - 0.0714\ln f_{a2} + 0.1429\ln f_{b2}}$$

In summary, from the gray-level differences of the edge point and of its neighbours along the gradient direction obtained in step 3.4, the distance δ between the sub-pixel edge point P′ and the pixel-level edge point P is computed with the region-1 expression when P lies in region 1 and with the region-2 expression when P lies in region 2.

If any of the gray-level differences of the edge point or of its neighbours along the gradient direction equals 0, set δ = 0 and record the number of occurrences of δ = 0 in Count_zero.
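A minimal sketch of step 3.5 in code (function and variable names are mine, not the patent's), including the δ = 0 guard for zero gray-level differences:

```python
import math

def subpixel_delta(f_d, f_c, f_p, f_a, f_b):
    """Offset from the pixel-level edge point P to the sub-pixel point P'.

    The five gray-level differences (indices -2..2 along the gradient)
    are log-transformed, the parabola of Eq. (2) is fitted with the fixed
    least-squares weights, and the vertex -m12/(2*m11) is returned.
    Returns (delta, hit_zero); hit_zero flags the delta = 0 fallback that
    the caller should tally in Count_zero.
    """
    vals = (f_d, f_c, f_p, f_a, f_b)
    if any(v == 0 for v in vals):
        return 0.0, True
    ld, lc, lp, la, lb = (math.log(v) for v in vals)
    m12 = -0.2 * ld - 0.1 * lc + 0.1 * la + 0.2 * lb
    m11 = 0.1429 * (ld + lb) - 0.0714 * (lc + la) - 0.1429 * lp
    return -m12 / (2.0 * m11), False
```

The same function serves both regions; only the neighbour points fed into it differ.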

Step 3.6: Compute the coordinates of the sub-pixel edge point P′ as follows. From the distance δ between the sub-pixel edge point P′ and the pixel-level edge point P obtained in step 3.5, and from the slope k_gradient of the gradient-direction line of the pixel-level edge point P, compute the coordinate offsets δ_x and δ_y between P and P′ in the x and y directions:

$$\delta_x = \frac{\delta}{\sqrt{k_{gradient}^2 + 1}}, \qquad \delta_y = \frac{k_{gradient}\,\delta}{\sqrt{k_{gradient}^2 + 1}}$$

where the slope k_gradient of the gradient-direction line of the pixel-level edge point P is obtained from the coordinates (x_p, y_p) of that edge point and the coarse center coordinates (x_oc, y_oc):

$$k_{gradient} = \frac{y_{oc} - y_p}{x_{oc} - x_p}$$

Then, for the pixel point P(x_p, y_p), the corresponding sub-pixel edge point is P′(x_p + δ_x, y_p + δ_y).
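Step 3.6 can be sketched as follows (my naming; note the sketch assumes x_oc ≠ x_p, i.e. the gradient line is not vertical, a case the region division of step 3.1 is designed to handle separately):

```python
import math

def subpixel_point(xp, yp, x_oc, y_oc, delta):
    """Project the offset delta along the gradient line through the
    coarse centre (x_oc, y_oc) and the pixel-level edge point P(xp, yp),
    returning the sub-pixel point P' = (xp + dx, yp + dy)."""
    k = (y_oc - yp) / (x_oc - xp)          # slope k_gradient of step 3.6
    dx = delta / math.sqrt(k * k + 1.0)
    dy = k * dx
    return xp + dx, yp + dy
```

With the coarse centre at the origin, an edge pixel at (4, 3) and delta = 1, the point moves to (4.8, 3.6), exactly one pixel further out along the radius.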

Step 3.7: For every edge pixel P(x_p, y_p), apply steps 3.2 to 3.6 in turn to obtain the corresponding sub-pixel edge point P′(x_p + δ_x, y_p + δ_y).

Step 4: Filter the sub-pixel edge points: the "isolated points" among them are handled by curvature filtering and the noise by mean filtering.

Step 4.1: Handle the "isolated points" of the sub-pixel edge by curvature filtering, as follows:

(1) Compute the curvature of every sub-pixel edge point as follows. Among the sequentially stored sub-pixel edge points, let (x′_p2, y′_p2) be the coordinates of the sub-pixel edge point P′, (x′_p1, y′_p1) those of the point immediately before P′, and (x′_p3, y′_p3) those of the point immediately after P′. The curvature of P′ is then

$$k = \frac{1}{r} = \frac{1}{\sqrt{(x_0 - x'_{p2})^2 + (y_0 - y'_{p2})^2}}$$

where (x_0, y_0) is the center of the circle through P′ and the points immediately before and after it,

$$x_0 = \frac{a - b + c}{d}, \qquad y_0 = \frac{e - f + g}{-d},$$

$$a = (x'_{p1} + x'_{p2})(x'_{p2} - x'_{p1})(y'_{p3} - y'_{p2})$$

$$b = (x'_{p2} + x'_{p3})(x'_{p3} - x'_{p2})(y'_{p2} - y'_{p1})$$

$$c = (y'_{p1} - y'_{p3})(y'_{p2} - y'_{p1})(y'_{p3} - y'_{p2})$$

$$d = 2\left[(x'_{p2} - x'_{p1})(y'_{p3} - y'_{p2}) - (x'_{p3} - x'_{p2})(y'_{p2} - y'_{p1})\right]$$

$$e = (y'_{p1} + y'_{p2})(y'_{p2} - y'_{p1})(x'_{p3} - x'_{p2})$$

$$f = (y'_{p2} + y'_{p3})(y'_{p3} - y'_{p2})(x'_{p2} - x'_{p1})$$

$$g = (x'_{p1} - x'_{p3})(x'_{p2} - x'_{p1})(x'_{p3} - x'_{p2})$$

(2) Filter all sub-pixel edge points according to their curvature, using the following criterion:

First sort the curvatures of the sub-pixel edge points in descending order and take as the curvature threshold the (3·n)-th element of this sequence, where n is the number of isolated points. Isolated points arise because, in step 3.5, one or more of the gray-level differences of an edge pixel and of its neighbours along the gradient direction is 0; therefore n equals Count_zero, the number of occurrences of δ = 0 recorded during sub-pixel edge location in step 3.5. Then segment the sub-pixel edge points with this threshold: if the curvature of a sub-pixel edge point P is greater than the curvature threshold, and greater than the curvatures of the points immediately before and after it, P is regarded as an isolated point and removed.
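The circumcircle curvature of step 4.1 can be coded directly from the a…g formulas above (illustrative code with my own naming; returning curvature 0 for collinear points is an added assumption, since the patent does not discuss that degenerate case):

```python
import math

def three_point_curvature(p1, p2, p3):
    """Curvature at p2 from the circle through three consecutive
    sub-pixel edge points (p1 immediately before, p3 immediately after)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * ((x2 - x1) * (y3 - y2) - (x3 - x2) * (y2 - y1))
    if d == 0.0:
        return 0.0                      # collinear: infinite radius
    a = (x1 + x2) * (x2 - x1) * (y3 - y2)
    b = (x2 + x3) * (x3 - x2) * (y2 - y1)
    c = (y1 - y3) * (y2 - y1) * (y3 - y2)
    e = (y1 + y2) * (y2 - y1) * (x3 - x2)
    f = (y2 + y3) * (y3 - y2) * (x2 - x1)
    g = (x1 - x3) * (x2 - x1) * (x3 - x2)
    x0 = (a - b + c) / d                # circumcentre, as in the patent
    y0 = (e - f + g) / (-d)
    return 1.0 / math.hypot(x0 - x2, y0 - y2)
```

For three points on the unit circle, e.g. (1, 0), (0, 1), (−1, 0), the function returns curvature 1.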

Step 4.2: Filter all sub-pixel edge points remaining after the removal of "isolated points" with a mean filter: the new coordinates of each sub-pixel edge point are the means of the x and y coordinates of the point itself and of the two points immediately before and after it.
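Step 4.2 is a three-point moving average; a sketch (my naming, and wrapping the indices around the closed contour is my assumption, since the edge points form a closed curve):

```python
def mean_filter_contour(points):
    """Replace each sub-pixel edge point (x, y) by the average of itself
    and its two neighbours on the sequentially stored closed contour."""
    n = len(points)
    out = []
    for i in range(n):
        (x0, y0) = points[i - 1]            # previous point (wraps at i = 0)
        (x1, y1) = points[i]
        (x2, y2) = points[(i + 1) % n]      # next point (wraps at i = n-1)
        out.append(((x0 + x1 + x2) / 3.0, (y0 + y1 + y2) / 3.0))
    return out
```

The filter is applied once to the whole contour, so each output point is computed from the unfiltered input coordinates.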

Step 5: Fit a circle to the filtered sub-pixel edge points by least squares, finally obtaining the center and radius of the circular target. The least-squares circle-fitting algorithm is as follows:

Let the number of filtered sub-pixel edge points be N_sub. The general equation of a quadratic curve is

$$x^2 + 2B_e xy + C_e y^2 + 2D_e x + 2E_e y + F_e = 0$$

When fitting the curve with the N_sub edge points, the sum of squared errors is

$$e^2 = \sum_{i=1}^{N\_sub}\left(x_i^2 + 2B_e x_i y_i + C_e y_i^2 + 2D_e x_i + 2E_e y_i + F_e\right)^2$$

Taking the partial derivatives of this expression with respect to B_e, C_e, D_e, E_e and F_e and setting each to zero yields a determinate system of 5 equations in 5 unknowns. Solving it, for example by matrix inversion or Gaussian elimination with column pivoting, gives the final center coordinates (x_cf, y_cf):

$$x_{cf} = \frac{B_e E_e - C_e D_e}{C_e - B_e^2}, \qquad y_{cf} = \frac{B_e D_e - E_e}{C_e - B_e^2}.$$
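Step 5 is a linear least-squares problem in the five conic coefficients; a sketch under my own naming (using numpy.linalg.lstsq rather than explicitly forming and inverting the 5×5 normal equations — both solve the same minimisation):

```python
import numpy as np

def fit_circle_center(pts):
    """Fit x^2 + 2B*x*y + C*y^2 + 2D*x + 2E*y + F = 0 to an (N, 2)
    array of sub-pixel edge points and return the centre
    ((B*E - C*D) / (C - B^2), (B*D - E) / (C - B^2))."""
    x, y = pts[:, 0], pts[:, 1]
    # Move x^2 to the right-hand side: the residual of each point is
    # linear in the unknown coefficients (B, C, D, E, F).
    A = np.column_stack([2 * x * y, y**2, 2 * x, 2 * y, np.ones_like(x)])
    B, C, D, E, F = np.linalg.lstsq(A, -x**2, rcond=None)[0]
    den = C - B * B
    return (B * E - C * D) / den, (B * D - E) / den
```

For exact points on the circle (x − 2)² + (y + 1)² = 9 the fit returns the centre (2, −1).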

Claims (2)

1. A method for locating the center of a circular target, characterized in that:

Step 1: Coarsely locate the center and radius of the circular target in the image, obtaining the coarse center (x_oc, y_oc) and the coarse circle radius R_c, where (x_oc, y_oc) are the coordinates of the center in the image coordinate system.

Step 2: Perform pixel-level edge location on the circular target in the image, as follows:

Step 2.1: From the coarse center and coarse radius, extract from the image a square region called the "region of interest": its center point is the coarse center of the circular target, and its side length is the coarse diameter plus 2 to 6 pixels.

Step 2.2: Apply the Canny operator to the extracted region of interest to detect the pixel-level edge of the circular target, then track the boundary formed by the detected pixel-level edge points, obtaining the coordinates of every pixel-level edge point and storing them in clockwise order.

Step 3: Perform sub-pixel edge location on the circular target in the image, as follows:

Step 3.1: Divide the edge pixels of the circular target into regions as follows. The x axis is the line through the coarse center of the circular target, positive horizontally to the right, with the pixel as unit length; the y axis is the line through the coarse center, positive vertically upward, with the pixel as unit length. Let θ be the angle of rotation, counterclockwise from the positive x axis, about the coarse center. According to θ, the edge pixels are divided into two parts: those with θ ∈ [0°, 45°] ∪ [135°, 225°] ∪ [315°, 360°] are called the edge pixels of region 1, and those with θ ∈ (45°, 135°) ∪ (225°, 315°) are called the edge pixels of region 2. The line from the coarse center to an edge pixel is the gradient direction of that edge pixel.

Step 3.2: Find the neighbouring points of each edge pixel along its gradient direction, where (x_p, y_p) are the pixel-unit coordinates of the edge point P. If P(x_p, y_p) is in region 1, the intersections of the gradient-direction line of P with the lines x = x_p+1, x = x_p+2 and x = x_p+3 are its first, second and third neighbours on the right along the gradient direction, A_1(x_a1, y_a1), B_1(x_b1, y_b1) and F_1(x_f1, y_f1), and the intersections with x = x_p−1, x = x_p−2 and x = x_p−3 are its first, second and third neighbours on the left, C_1(x_c1, y_c1), D_1(x_d1, y_d1) and E_1(x_e1, y_e1). If P(x_p, y_p) is in region 2, the intersections of the gradient-direction line of P with y = y_p+1, y = y_p+2 and y = y_p+3 are its first, second and third neighbours above, A_2(x_a2, y_a2), B_2(x_b2, y_b2) and F_2(x_f2, y_f2), and the intersections with y = y_p−1, y = y_p−2 and y = y_p−3 are its first, second and third neighbours below, C_2(x_c2, y_c2), D_2(x_d2, y_d2) and E_2(x_e2, y_e2).

Step 3.3: Obtain by linear gray-level interpolation the gray values of the neighbouring points of the edge pixel P(x_p, y_p) along its gradient direction, where f(x, y) denotes the gray value of the pixel at (x, y) and [·] denotes the integer part:

(1) If the edge pixel P(x_p, y_p) is in region 1, the gray value of the neighbour A_1(x_a1, y_a1) is
f(x_a1, y_a1) = (1 − λ)·f(x_a1, [y_a1]) + λ·f(x_a1, [y_a1] + 1), with λ = y_a1 − [y_a1];
the gray values of B_1, F_1, C_1, D_1 and E_1 are obtained by the same formula applied to their respective coordinates.

(2) If the edge pixel P(x_p, y_p) is in region 2, the gray value of the neighbour A_2(x_a2, y_a2) is
f(x_a2, y_a2) = (1 − λ)·f([x_a2], y_a2) + λ·f([x_a2] + 1, y_a2), with λ = x_a2 − [x_a2];
the gray values of B_2, F_2, C_2, D_2 and E_2 are obtained by the same formula applied to their respective coordinates.

Step 3.4: From the gray values of the neighbours obtained in step 3.3 and the gray value of the edge point P(x_p, y_p) itself, take the average of the forward and backward differences of the gray values as the gray-level difference of the edge point and of each of its neighbours along the gradient direction:

(1) If the edge pixel P(x_p, y_p) is in region 1,
f_a1 = |f(x_p, y_p) − f(x_a1, y_a1)|/2 + |f(x_a1, y_a1) − f(x_b1, y_b1)|/2
f_b1 = |f(x_a1, y_a1) − f(x_b1, y_b1)|/2 + |f(x_b1, y_b1) − f(x_f1, y_f1)|/2
f_p1 = |f(x_c1, y_c1) − f(x_p, y_p)|/2 + |f(x_p, y_p) − f(x_a1, y_a1)|/2
f_c1 = |f(x_d1, y_d1) − f(x_c1, y_c1)|/2 + |f(x_c1, y_c1) − f(x_p, y_p)|/2
f_d1 = |f(x_e1, y_e1) − f(x_d1, y_d1)|/2 + |f(x_d1, y_d1) − f(x_c1, y_c1)|/2

(2) If the edge pixel P(x_p, y_p) is in region 2, the gray-level differences f_a2, f_b2, f_p2, f_c2 and f_d2 are defined by the same formulas applied to the region-2 neighbours A_2, B_2, F_2, C_2, D_2 and E_2.

Step 3.5: From the gray-level differences of the edge point and of its neighbours along the gradient direction obtained in step 3.4, compute the distance δ between the sub-pixel edge point P′ and the pixel-level edge point P as
δ = (0.1 ln f_d + 0.05 ln f_c − 0.05 ln f_a − 0.1 ln f_b) / (0.1429 ln f_d − 0.0714 ln f_c − 0.1429 ln f_p − 0.0714 ln f_a + 0.1429 ln f_b),
with the subscript 1 or 2 according to the region of P. If any of these gray-level differences equals 0, set δ = 0 and record the number of occurrences of δ = 0 in Count_zero.

Step 3.6: Compute the coordinates of the sub-pixel edge point P′ as follows. From the distance δ obtained in step 3.5 and the slope k_gradient of the gradient-direction line of the pixel-level edge point P, compute the coordinate offsets of P′ from P in the x and y directions:
δ_x = δ / √(k_gradient² + 1), δ_y = k_gradient · δ / √(k_gradient² + 1),
where k_gradient is obtained from the coordinates (x_p, y_p) of the edge pixel and the coarse center coordinates (x_oc, y_oc):
k_gradient = (y_oc − y_p) / (x_oc − x_p).
Then, for the pixel point P(x_p, y_p), the corresponding sub-pixel edge point is P′(x_p + δ_x, y_p + δ_y).

Step 3.7: For every edge pixel P(x_p, y_p), apply steps 3.2 to 3.6 in turn to obtain the corresponding sub-pixel edge point P′(x_p + δ_x, y_p + δ_y).

Step 4: Filter the sub-pixel edge points: the "isolated points" among them are handled by curvature filtering and the noise by mean filtering.

Step 4.1: Handle the "isolated points" of the sub-pixel edge by curvature filtering, as follows:

(1) Compute the curvature of every sub-pixel edge point. Among the sequentially stored sub-pixel edge points, let (x′_p2, y′_p2) be the coordinates of the sub-pixel edge point P′, (x′_p1, y′_p1) those of the point immediately before P′, and (x′_p3, y′_p3) those of the point immediately after P′. The curvature of P′ is
k = 1/r = 1/√((x_0 − x′_p2)² + (y_0 − y′_p2)²),
where (x_0, y_0) is the center of the circle through P′ and the points immediately before and after it,
x_0 = (a − b + c)/d, y_0 = (e − f + g)/(−d),
a = (x′_p1 + x′_p2)(x′_p2 − x′_p1)(y′_p3 − y′_p2)
b = (x′_p2 + x′_p3)(x′_p3 − x′_p2)(y′_p2 − y′_p1)
c = (y′_p1 − y′_p3)(y′_p2 − y′_p1)(y′_p3 − y′_p2)
d = 2[(x′_p2 − x′_p1)(y′_p3 − y′_p2) − (x′_p3 − x′_p2)(y′_p2 − y′_p1)]
e = (y′_p1 + y′_p2)(y′_p2 − y′_p1)(x′_p3 − x′_p2)
f = (y′_p2 + y′_p3)(y′_p3 − y′_p2)(x′_p2 − x′_p1)
g = (x′_p1 − x′_p3)(x′_p2 − x′_p1)(x′_p3 − x′_p2)

(2) Filter all sub-pixel edge points according to their curvature, using the following criterion: sort the curvatures in descending order and take as the curvature threshold the (3·n)-th element of this sequence, where n, the number of isolated points, equals Count_zero, the number of occurrences of δ = 0 recorded during sub-pixel edge location in step 3.5; then, if the curvature of a sub-pixel edge point P is greater than the curvature threshold and greater than the curvatures of the points immediately before and after it, P is regarded as an isolated point and removed.

Step 4.2: Filter all sub-pixel edge points remaining after the removal of "isolated points" with a mean filter: the new coordinates of each sub-pixel edge point are the means of the x and y coordinates of the point itself and of the two points immediately before and after it.

Step 5: Fit a circle to the filtered sub-pixel edge points by least squares, finally obtaining the center and radius of the circular target.
2. The method for locating the center of a circular target according to claim 1, characterized in that the coarse location of the center and radius of the circular target in step 1 is performed as follows:

Step 1.1: Denoise and threshold the image, obtaining a binary image in which every pixel has gray value 255 or 0.

Step 1.2: Perform boundary extraction and boundary tracking on the circular target in the binary image.

Step 1.3: Fit a circle to the tracked boundary points by the contour-centroid method, obtaining the coarse center and coarse radius of the circular target, and store the coarse center coordinates (x_oc, y_oc) and the coarse radius R_c.
CN2008100227931A 2008-07-22 2008-07-22 A method for locating the center of a circular target Expired - Fee Related CN101334263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008100227931A CN101334263B (en) 2008-07-22 2008-07-22 A method for locating the center of a circular target

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008100227931A CN101334263B (en) 2008-07-22 2008-07-22 A method for locating the center of a circular target

Publications (2)

Publication Number Publication Date
CN101334263A true CN101334263A (en) 2008-12-31
CN101334263B CN101334263B (en) 2010-09-15

Family

ID=40197024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008100227931A Expired - Fee Related CN101334263B (en) 2008-07-22 2008-07-22 A method for locating the center of a circular target

Country Status (1)

Country Link
CN (1) CN101334263B (en)

Cited By (62)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101902548A (en) * 2009-05-27 2010-12-01 索尼公司 Image processing equipment, imaging device and image processing method
CN101604380B (en) * 2009-07-21 2011-07-20 上海理工大学 Method for identifying human head by diameter searching
CN102135416A (en) * 2010-12-30 2011-07-27 天津普达软件技术有限公司 Online image detecting system and method for bottle covers
CN102213591A (en) * 2010-04-01 2011-10-12 安鹏科技股份有限公司 Digital image analysis device
CN102252661A (en) * 2011-04-02 2011-11-23 华南理工大学 Globoid three-dimensional positioning method for machine vision
CN102496003A (en) * 2011-11-21 2012-06-13 中国科学院自动化研究所 Identification block and target locating method based on identification block
CN102590052A (en) * 2012-02-28 2012-07-18 清华大学 Method for measuring particulate size of foreign matters in liquid
CN102637300A (en) * 2012-04-26 2012-08-15 重庆大学 Improved Zernike moment edge detection method
CN102706280A (en) * 2012-06-21 2012-10-03 中国地质大学(武汉) Automatic centering method based on laser measurement
CN102721364A (en) * 2011-03-30 2012-10-10 比亚迪股份有限公司 Positioning method and positioning device for workpiece
CN102855608A (en) * 2012-07-18 2013-01-02 奇瑞汽车股份有限公司 Method and device for weakening image
CN103235939A (en) * 2013-05-08 2013-08-07 哈尔滨工业大学 Datum point positioning method based on machine vision
CN103245309A (en) * 2013-05-21 2013-08-14 杭州鼎热科技有限公司 Method for compensating laser flatness measurement error
CN103512494A (en) * 2013-07-16 2014-01-15 宁波职业技术学院 Visual inspection system and method for scale micro changes of plant fruits
CN103632152A (en) * 2013-12-04 2014-03-12 天津普达软件技术有限公司 Method for detecting forks for drummed instant noodle
CN104359403A (en) * 2014-11-21 2015-02-18 天津工业大学 Plane part size measurement method based on sub-pixel edge algorithm
CN104408465A (en) * 2014-11-01 2015-03-11 西南大学 Canny-matrix-pulse-edge-mode-based silkworm chrysalis male-female identification method
CN104778677A (en) * 2014-01-13 2015-07-15 联想(北京)有限公司 Positioning method, device and equipment
CN105005985A (en) * 2015-06-19 2015-10-28 沈阳工业大学 Backlight image micron-order edge detection method
CN105457908A (en) * 2015-11-12 2016-04-06 孙高磊 Sorting and quick locating method and system for small-size glass panels on basis of monocular CCD
CN105635583A (en) * 2016-01-27 2016-06-01 宇龙计算机通信科技(深圳)有限公司 Shooting method and device
CN105956536A (en) * 2016-04-26 2016-09-21 北京森科赛德科技有限公司 Pretreatment method and device for iris recognition
CN103729655B (en) * 2014-01-22 2017-03-01 哈尔滨工业大学 A kind of detection method for slice component vision localization
CN106651959A (en) * 2016-11-15 2017-05-10 东南大学 Optical field camera micro-lens array geometric parameter calibration method
CN106815585A (en) * 2017-01-20 2017-06-09 浙江大学 A kind of high-precision vision positioning method of complex dynamic environment hole characteristic
CN106878701A (en) * 2016-12-31 2017-06-20 歌尔科技有限公司 The detection method and device of a kind of TVLine
WO2017118285A1 (en) * 2016-01-05 2017-07-13 北京度量科技有限公司 Method for rapidly extracting central point of circular image
CN107301636A (en) * 2017-05-17 2017-10-27 华南理工大学 A kind of high density circuit board circular hole sub-pixel detection method based on Gauss curve fitting
CN107516325A (en) * 2017-08-22 2017-12-26 上海理工大学 Center of circle detection method based on sub-pixel edge
CN107577979A (en) * 2017-07-26 2018-01-12 中科创达软件股份有限公司 DataMatrix type Quick Response Codes method for quickly identifying, device and electronic equipment
CN107678551A (en) * 2017-10-19 2018-02-09 京东方科技集团股份有限公司 Gesture identification method and device, electronic equipment
CN107845098A (en) * 2017-11-14 2018-03-27 南京理工大学 Liver cancer image full-automatic partition method based on random forest and fuzzy clustering
CN108226915A (en) * 2017-12-25 2018-06-29 中国人民解放军63921部队 A kind of quantitatively characterizing space multiple target spatial distribution method
CN108921865A (en) * 2018-06-27 2018-11-30 南京大学 A kind of jamproof sub-pix line fitting method
CN109035230A (en) * 2018-07-19 2018-12-18 中导光电设备股份有限公司 A kind of Circularhole diameter vision measuring method
CN109011654A (en) * 2018-09-05 2018-12-18 浙江大丰实业股份有限公司 Peoperty walking identification mechanism
CN109084675A (en) * 2018-06-04 2018-12-25 哈尔滨工业大学 Center of circle positioning device and method based on Embedded geometrical characteristic in conjunction with Zernike square
CN109410268A (en) * 2018-11-06 2019-03-01 温州雷蒙光电科技有限公司 A kind of determination method and system in the concentric loop center of circle of corneal topography
CN109900711A (en) * 2019-04-02 2019-06-18 天津工业大学 Workpiece defect detection method based on machine vision
CN109990936A (en) * 2019-03-12 2019-07-09 高新兴创联科技有限公司 High speed railway track stress automated watch-keeping facility and method
CN110223339A (en) * 2019-05-27 2019-09-10 盐城工学院 One kind being based on machine vision thermal protector calibration point center positioning method
CN110335322A (en) * 2019-07-09 2019-10-15 成都理工大学 Image-based road recognition method and road recognition device
CN110349199A (en) * 2019-06-25 2019-10-18 杭州汇萃智能科技有限公司 A kind of object roundness measurement method
CN110533682A (en) * 2019-08-30 2019-12-03 福建省德腾智能科技有限公司 A kind of image border real time extracting method based on curvature filtering
CN110930423A (en) * 2019-11-26 2020-03-27 广州敏视数码科技有限公司 Object edge feature recognition and extraction method
CN111539972A (en) * 2020-04-24 2020-08-14 大连理工大学 A method for segmentation of cerebellar vermis in ultrasound images
CN112066874A (en) * 2020-08-14 2020-12-11 苏州环球科技股份有限公司 Multi-position 3D scanning online detection method
CN112116667A (en) * 2020-09-22 2020-12-22 扬州大学 An Algorithm for Measuring the Diameter of Engine Surface Machined Holes
CN112478779A (en) * 2020-11-27 2021-03-12 北京石油化工学院 Base plate visual positioning method and system and base plate carrying joint robot device
CN113192120A (en) * 2021-04-25 2021-07-30 无锡信捷电气股份有限公司 Circle positioning algorithm based on two-dimensional edge measurement and least square principle
CN113284154A (en) * 2021-05-25 2021-08-20 武汉钢铁有限公司 Steel coil end face image segmentation method and device and electronic equipment
CN113406093A (en) * 2021-08-19 2021-09-17 苏州维嘉科技股份有限公司 Optical detection equipment and method and device for measuring object attribute thereof
CN113470056A (en) * 2021-09-06 2021-10-01 成都新西旺自动化科技有限公司 Sub-pixel edge point detection method based on Gaussian model convolution
CN113470102A (en) * 2021-06-23 2021-10-01 依未科技(北京)有限公司 Method, device, medium and equipment for measuring fundus blood vessel curvature with high precision
CN113487589A (en) * 2021-07-22 2021-10-08 上海嘉奥信息科技发展有限公司 Sub-pixel circle center detection method and system
CN113610881A (en) * 2021-08-25 2021-11-05 浙江大华技术股份有限公司 Target object determination method and device, storage medium and electronic device
CN113720280A (en) * 2021-09-03 2021-11-30 北京机电研究所有限公司 Bar center positioning method based on machine vision
CN114663361A (en) * 2022-03-04 2022-06-24 南京图格医疗科技有限公司 A detection method and system for searching imaging radius
CN114923417A (en) * 2022-07-22 2022-08-19 沈阳和研科技有限公司 Method and system for positioning multiple circular workpieces for dicing saw
CN115082552A (en) * 2022-07-25 2022-09-20 荣耀终端有限公司 Marking hole positioning method and device, assembly equipment and storage medium
CN116168025A (en) * 2023-04-24 2023-05-26 日照金果粮油有限公司 Oil curtain type fried peanut production system
CN117291972A (en) * 2023-11-23 2023-12-26 湖南科天健光电技术有限公司 Sub-pixel positioning method and device for circular mark, electronic equipment and medium

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102072707B (en) * 2011-01-12 2012-05-30 河南理工大学 A fast detection method for the center and radius of circles in digital images
CN109344785B (en) * 2018-10-12 2021-10-01 北京航空航天大学 A high-precision planetary center positioning method in autonomous optical navigation in deep space

Cited By (101)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101902548A (en) * 2009-05-27 2010-12-01 索尼公司 Image processing equipment, imaging device and image processing method
CN101604380B (en) * 2009-07-21 2011-07-20 上海理工大学 Method for identifying human head by diameter searching
CN102213591A (en) * 2010-04-01 2011-10-12 安鹏科技股份有限公司 Digital image analysis device
CN102213591B (en) * 2010-04-01 2013-10-23 安鹏科技股份有限公司 Digital Image Analysis Device
CN102135416B (en) * 2010-12-30 2012-10-03 天津普达软件技术有限公司 Online image detecting system and method for bottle covers
CN102135416A (en) * 2010-12-30 2011-07-27 天津普达软件技术有限公司 Online image detecting system and method for bottle covers
CN102721364A (en) * 2011-03-30 2012-10-10 比亚迪股份有限公司 Positioning method and positioning device for workpiece
CN102721364B (en) * 2011-03-30 2015-12-02 比亚迪股份有限公司 A kind of localization method of workpiece and device thereof
CN102252661A (en) * 2011-04-02 2011-11-23 华南理工大学 Globoid three-dimensional positioning method for machine vision
CN102252661B (en) * 2011-04-02 2013-04-10 华南理工大学 Globoid three-dimensional positioning method for machine vision
CN102496003A (en) * 2011-11-21 2012-06-13 中国科学院自动化研究所 Identification block and target locating method based on identification block
CN102590052A (en) * 2012-02-28 2012-07-18 清华大学 Method for measuring particulate size of foreign matters in liquid
CN102637300A (en) * 2012-04-26 2012-08-15 重庆大学 Improved Zernike moment edge detection method
CN102637300B (en) * 2012-04-26 2014-08-06 重庆大学 Improved Zernike moment edge detection method
CN102706280A (en) * 2012-06-21 2012-10-03 中国地质大学(武汉) Automatic centering method based on laser measurement
CN102706280B (en) * 2012-06-21 2014-05-14 中国地质大学(武汉) Automatic centering method based on laser measurement
CN102855608B (en) * 2012-07-18 2015-01-28 奇瑞汽车股份有限公司 Method and device for weakening image
CN102855608A (en) * 2012-07-18 2013-01-02 奇瑞汽车股份有限公司 Method and device for weakening image
CN103235939A (en) * 2013-05-08 2013-08-07 哈尔滨工业大学 Datum point positioning method based on machine vision
CN103245309A (en) * 2013-05-21 2013-08-14 杭州鼎热科技有限公司 Method for compensating laser flatness measurement error
CN103512494A (en) * 2013-07-16 2014-01-15 宁波职业技术学院 Visual inspection system and method for scale micro changes of plant fruits
CN103512494B (en) * 2013-07-16 2017-02-08 宁波职业技术学院 Visual inspection system and method for scale micro changes of plant fruits
CN103632152A (en) * 2013-12-04 2014-03-12 天津普达软件技术有限公司 Method for detecting forks for drummed instant noodle
CN104778677A (en) * 2014-01-13 2015-07-15 联想(北京)有限公司 Positioning method, device and equipment
CN103729655B (en) * 2014-01-22 2017-03-01 哈尔滨工业大学 A kind of detection method for slice component vision localization
CN104408465A (en) * 2014-11-01 2015-03-11 西南大学 Canny-matrix-pulse-edge-mode-based silkworm chrysalis male-female identification method
CN104359403A (en) * 2014-11-21 2015-02-18 天津工业大学 Plane part size measurement method based on sub-pixel edge algorithm
CN104359403B (en) * 2014-11-21 2017-03-29 天津工业大学 Planar part dimension measurement method based on sub-pixel edge algorithm
CN105005985A (en) * 2015-06-19 2015-10-28 沈阳工业大学 Backlight image micron-order edge detection method
CN105005985B (en) * 2015-06-19 2017-10-31 沈阳工业大学 Backlight image micron order edge detection method
CN105457908A (en) * 2015-11-12 2016-04-06 孙高磊 Sorting and quick locating method and system for small-size glass panels on basis of monocular CCD
CN105457908B (en) * 2015-11-12 2018-04-13 孙高磊 The sorting method for rapidly positioning and system of small size glass panel based on monocular CCD
WO2017118285A1 (en) * 2016-01-05 2017-07-13 北京度量科技有限公司 Method for rapidly extracting central point of circular image
CN105635583A (en) * 2016-01-27 2016-06-01 宇龙计算机通信科技(深圳)有限公司 Shooting method and device
CN105956536A (en) * 2016-04-26 2016-09-21 北京森科赛德科技有限公司 Pretreatment method and device for iris recognition
CN106651959A (en) * 2016-11-15 2017-05-10 东南大学 Optical field camera micro-lens array geometric parameter calibration method
CN106651959B (en) * 2016-11-15 2019-05-31 东南大学 A kind of scaling method of light-field camera microlens array geometric parameter
CN106878701A (en) * 2016-12-31 2017-06-20 歌尔科技有限公司 The detection method and device of a kind of TVLine
CN106815585A (en) * 2017-01-20 2017-06-09 浙江大学 A kind of high-precision vision positioning method of complex dynamic environment hole characteristic
CN106815585B (en) * 2017-01-20 2020-01-10 浙江大学 High-precision visual positioning method for hole characteristics of complex dynamic environment
CN107301636A (en) * 2017-05-17 2017-10-27 华南理工大学 A kind of high density circuit board circular hole sub-pixel detection method based on Gauss curve fitting
WO2018209941A1 (en) * 2017-05-17 2018-11-22 华南理工大学 High-density circuit board circular hole sub-pixel detection method based on gaussian fitting
CN107577979B (en) * 2017-07-26 2020-07-03 中科创达软件股份有限公司 Method and device for quickly identifying DataMatrix type two-dimensional code and electronic equipment
CN107577979A (en) * 2017-07-26 2018-01-12 中科创达软件股份有限公司 DataMatrix type Quick Response Codes method for quickly identifying, device and electronic equipment
CN107516325A (en) * 2017-08-22 2017-12-26 上海理工大学 Center of circle detection method based on sub-pixel edge
CN107678551A (en) * 2017-10-19 2018-02-09 京东方科技集团股份有限公司 Gesture identification method and device, electronic equipment
CN107678551B (en) * 2017-10-19 2021-12-28 京东方科技集团股份有限公司 Gesture recognition method and device and electronic equipment
US11402918B2 (en) 2017-10-19 2022-08-02 Boe Technology Group Co., Ltd. Method for controlling terminal apparatus, apparatus for controlling terminal apparatus, and computer-program product
CN107845098A (en) * 2017-11-14 2018-03-27 南京理工大学 Liver cancer image full-automatic partition method based on random forest and fuzzy clustering
CN108226915A (en) * 2017-12-25 2018-06-29 中国人民解放军63921部队 A kind of quantitatively characterizing space multiple target spatial distribution method
CN108226915B (en) * 2017-12-25 2021-07-30 中国人民解放军63921部队 Quantitative representation space multi-target spatial distribution method
CN109084675A (en) * 2018-06-04 2018-12-25 哈尔滨工业大学 Center of circle positioning device and method based on Embedded geometrical characteristic in conjunction with Zernike square
CN108921865B (en) * 2018-06-27 2022-03-18 南京大学 Anti-interference sub-pixel straight line fitting method
CN108921865A (en) * 2018-06-27 2018-11-30 南京大学 A kind of jamproof sub-pix line fitting method
CN109035230B (en) * 2018-07-19 2021-11-09 中导光电设备股份有限公司 Round hole diameter visual measurement method
CN109035230A (en) * 2018-07-19 2018-12-18 中导光电设备股份有限公司 A kind of Circularhole diameter vision measuring method
CN109011654B (en) * 2018-09-05 2020-05-29 浙江大丰实业股份有限公司 Stage property walking identification mechanism
CN109011654A (en) * 2018-09-05 2018-12-18 浙江大丰实业股份有限公司 Peoperty walking identification mechanism
CN109410268A (en) * 2018-11-06 2019-03-01 温州雷蒙光电科技有限公司 A kind of determination method and system in the concentric loop center of circle of corneal topography
CN109410268B (en) * 2018-11-06 2020-06-23 温州高视雷蒙光电科技有限公司 Method and system for determining circle center of concentric ring of corneal topography
CN109990936A (en) * 2019-03-12 2019-07-09 高新兴创联科技有限公司 High speed railway track stress automated watch-keeping facility and method
CN109900711A (en) * 2019-04-02 2019-06-18 天津工业大学 Workpiece defect detection method based on machine vision
CN110223339A (en) * 2019-05-27 2019-09-10 盐城工学院 One kind being based on machine vision thermal protector calibration point center positioning method
CN110223339B (en) * 2019-05-27 2021-07-16 盐城工学院 A method for locating the center of calibration point based on machine vision thermal protector
CN110349199B (en) * 2019-06-25 2021-07-30 杭州汇萃智能科技有限公司 Object roundness measuring method
CN110349199A (en) * 2019-06-25 2019-10-18 杭州汇萃智能科技有限公司 A kind of object roundness measurement method
CN110335322A (en) * 2019-07-09 2019-10-15 成都理工大学 Image-based road recognition method and road recognition device
CN110335322B (en) * 2019-07-09 2024-03-01 成都理工大学 Road recognition method and road recognition device based on image
CN110533682A (en) * 2019-08-30 2019-12-03 福建省德腾智能科技有限公司 A kind of image border real time extracting method based on curvature filtering
CN110533682B (en) * 2019-08-30 2023-02-14 福建省德腾智能科技有限公司 Image edge real-time extraction method based on curvature filtering
CN110930423A (en) * 2019-11-26 2020-03-27 广州敏视数码科技有限公司 Object edge feature recognition and extraction method
CN111539972A (en) * 2020-04-24 2020-08-14 大连理工大学 A method for segmentation of cerebellar vermis in ultrasound images
CN111539972B (en) * 2020-04-24 2023-04-18 大连理工大学 Method for segmenting cerebellar lumbricus in ultrasonic image
CN112066874A (en) * 2020-08-14 2020-12-11 苏州环球科技股份有限公司 Multi-position 3D scanning online detection method
CN112116667B (en) * 2020-09-22 2023-11-24 扬州大学 Method for measuring diameter of machined hole on surface of engine
CN112116667A (en) * 2020-09-22 2020-12-22 扬州大学 An Algorithm for Measuring the Diameter of Engine Surface Machined Holes
CN112478779A (en) * 2020-11-27 2021-03-12 北京石油化工学院 Base plate visual positioning method and system and base plate carrying joint robot device
CN112478779B (en) * 2020-11-27 2022-07-12 北京石油化工学院 Base plate visual positioning method and system and base plate carrying joint robot device
CN113192120A (en) * 2021-04-25 2021-07-30 无锡信捷电气股份有限公司 Circle positioning algorithm based on two-dimensional edge measurement and least square principle
CN113192120B (en) * 2021-04-25 2024-12-13 无锡信捷电气股份有限公司 Circle Positioning Algorithm Based on Two-Dimensional Edge Measurement and Least Squares Principle
CN113284154A (en) * 2021-05-25 2021-08-20 武汉钢铁有限公司 Steel coil end face image segmentation method and device and electronic equipment
CN113284154B (en) * 2021-05-25 2022-04-26 武汉钢铁有限公司 Steel coil end face image segmentation method and device and electronic equipment
CN113470102B (en) * 2021-06-23 2024-06-11 依未科技(北京)有限公司 Method, device, medium and equipment for measuring fundus blood vessel curvature with high precision
CN113470102A (en) * 2021-06-23 2021-10-01 依未科技(北京)有限公司 Method, device, medium and equipment for measuring fundus blood vessel curvature with high precision
CN113487589B (en) * 2021-07-22 2024-04-19 上海嘉奥信息科技发展有限公司 Sub-pixel circle center detection method and system
CN113487589A (en) * 2021-07-22 2021-10-08 上海嘉奥信息科技发展有限公司 Sub-pixel circle center detection method and system
CN113406093B (en) * 2021-08-19 2021-11-30 苏州维嘉科技股份有限公司 Optical detection equipment and method and device for measuring object attribute thereof
CN113406093A (en) * 2021-08-19 2021-09-17 苏州维嘉科技股份有限公司 Optical detection equipment and method and device for measuring object attribute thereof
CN113610881B (en) * 2021-08-25 2024-03-01 浙江华感科技有限公司 Target object determination method and device, storage medium and electronic device
CN113610881A (en) * 2021-08-25 2021-11-05 浙江大华技术股份有限公司 Target object determination method and device, storage medium and electronic device
CN113720280A (en) * 2021-09-03 2021-11-30 北京机电研究所有限公司 Bar center positioning method based on machine vision
CN113470056B (en) * 2021-09-06 2021-11-16 成都新西旺自动化科技有限公司 Sub-pixel edge point detection method based on Gaussian model convolution
CN113470056A (en) * 2021-09-06 2021-10-01 成都新西旺自动化科技有限公司 Sub-pixel edge point detection method based on Gaussian model convolution
CN114663361A (en) * 2022-03-04 2022-06-24 南京图格医疗科技有限公司 A detection method and system for searching imaging radius
CN114923417B (en) * 2022-07-22 2022-10-14 沈阳和研科技有限公司 Method and system for positioning multiple circular workpieces for dicing saw
CN114923417A (en) * 2022-07-22 2022-08-19 沈阳和研科技有限公司 Method and system for positioning multiple circular workpieces for dicing saw
CN115082552B (en) * 2022-07-25 2022-12-27 荣耀终端有限公司 Marking hole positioning method and device, assembly equipment and storage medium
CN115082552A (en) * 2022-07-25 2022-09-20 荣耀终端有限公司 Marking hole positioning method and device, assembly equipment and storage medium
CN116168025A (en) * 2023-04-24 2023-05-26 日照金果粮油有限公司 Oil curtain type fried peanut production system
CN117291972A (en) * 2023-11-23 2023-12-26 湖南科天健光电技术有限公司 Sub-pixel positioning method and device for circular mark, electronic equipment and medium
CN117291972B (en) * 2023-11-23 2024-02-13 湖南科天健光电技术有限公司 Sub-pixel positioning method and device for circular mark, electronic equipment and medium

Also Published As

Publication number Publication date
CN101334263B (en) 2010-09-15

Similar Documents

Publication Publication Date Title
CN101334263A (en) Method for locating the center of a circular target
CN104299260B (en) Contact network three-dimensional reconstruction method based on SIFT and LBP point cloud registration
CN110866924B (en) Line structured light center line extraction method and storage medium
Zhang et al. Robust depth estimation for light field via spinning parallelogram operator
CN109859226B (en) Detection method of checkerboard corner sub-pixels for graph segmentation
Wei et al. Tensor voting guided mesh denoising
CN103400151B (en) The optical remote sensing image of integration and GIS autoregistration and Clean water withdraw method
Da et al. Sub-pixel edge detection based on an improved moment
CN108122256B (en) A method of it approaches under state and rotates object pose measurement
CN111415376B (en) Automobile glass subpixel contour extraction method and automobile glass detection method
CN104318548A (en) Rapid image registration implementation method based on space sparsity and SIFT feature extraction
CN104899888B (en) A kind of image sub-pixel edge detection method based on Legendre squares
CN112233116B (en) Concave-convex mark visual detection method based on neighborhood decision and gray level co-occurrence matrix description
CN111354047B (en) Computer vision-based camera module positioning method and system
CN105740829A (en) Scanning line processing based automatic reading method for pointer instrument
CN107452030A (en) Method for registering images based on contour detecting and characteristic matching
CN104077775A (en) Shape matching method and device combining skeleton feature points and shape context
CN101488224A (en) Characteristic point matching method based on relativity measurement
Mousa et al. New DTM extraction approach from airborne images derived DSM
CN105678737A (en) Digital image corner point detection method based on Radon transform
CN103116890B (en) A kind of intelligent search matching process based on video image
CN101430789B (en) Image edge detection method based on Fast Slant Stack transformation
CN110428460B (en) Discontinuous region segmentation method based on quadrature phase shift stripes
CN108986160A (en) A kind of image laser center line extraction method containing specular light interference
CN108460348A (en) Road target detection method based on threedimensional model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: SOUTHEAST UNIV.

Effective date: 20131018

Owner name: NANTONG OUTE CONSTRUCTION MATERIALS EQUIPMENT CO.,

Free format text: FORMER OWNER: SOUTHEAST UNIV.

Effective date: 20131018

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 210096 NANJING, JIANGSU PROVINCE TO: 226600 NANTONG, JIANGSU PROVINCE

TR01 Transfer of patent right

Effective date of registration: 20131018

Address after: 226600 Jiangsu city of Nantong province Haian County baiding town into the Industrial Park

Patentee after: NANTONG OUTPACE BUILDING MATERIAL EQUIPMENT CO.,LTD.

Patentee after: SOUTHEAST University

Address before: 210096 Jiangsu city Nanjing Province four pailou No. 2

Patentee before: Southeast University

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100915

Termination date: 20210722

CF01 Termination of patent right due to non-payment of annual fee