CN104123542B - Device and method for positioning hub workpiece - Google Patents
- Publication number: CN104123542B (application CN201410349103.9A)
- Authority: CN (China)
- Legal status: Expired - Fee Related
Description
Technical Field
The present invention relates to workpiece positioning technology, and in particular to a device and method for positioning hub workpieces.
Background Art
On automotive automatic assembly lines, workpiece positioning is a common operational requirement. Applying industrial robots with computer vision to automated automobile assembly can effectively reduce interference from human factors, significantly improve production efficiency and product quality, and lower production costs. When an industrial robot machines automobile hub workpieces automatically, computer vision techniques must be used to analyze the actual workpiece images captured by an industrial camera, identify the hub in the image, and compute its geometric position information; from this, the robot's grasping pose and motion trajectory are determined so that the robot can grasp and transport the hub in real time.
Hub workpieces are usually castings: after rough machining, multiple casting lines remain on the side of the hub and its surface is rough. Moreover, in real working environments there are often other objects around the hub, the hub may be translated or rotated, the hub may be only partially captured in the image, and the background where the hub is placed can be complex. Under these conditions the hub workpiece image is complicated, making it difficult to locate the hub and its valve stem.
The prior art related to the present invention is introduced as follows:
1. Technical solution of prior art 1
In the paper "Research on an online automobile hub recognition system" (Machinery Design & Manufacture, 2007(10):164-166), Zhao Yuliang, Liu Weijun, and Liu Yongxian studied the automatic recognition and classification of multiple types of automobile hubs randomly mixed on a conveyor belt. The basic steps of the method are image acquisition, image preprocessing, feature extraction, and recognition/classification. The key step for hub positioning and classification is extracting five types of features from the hub image: whether there is a hole at the hub center; the hub diameter; the number of holes in the peripheral region of the hub; the area occupied by the whole hub; and the gray level with the highest pixel count within the hub region of the grayscale image.
Prior art 1 fits the hub contour circle by image region segmentation, Roberts-operator edge detection, and least-squares fitting. In a real machining environment, however, the background where the hub is placed may be complex, or similar in color to the hub target; when the edge detection results are poor, image features may be missed or misjudged. Moreover, when the hub workpiece is only partially captured, the method cannot compute the hub's shape and position.
2. Technical solution of prior art 2
In the paper "A method for detecting hub shape and position parameters based on area-array CCD" (Bulletin of Science and Technology, 2009, 25(2):196-201), Le Ying, Xu Xinmin, and Wu Xiaobo proposed a high-precision shape-and-position parameter detection method based on area-array CCD imaging and computer image processing. The basic steps are: capture a hub image containing a calibration template; perform edge detection via grayscale conversion and region segmentation, correcting the geometric distortion of the hub boundary with an optical distortion model; then apply a sub-pixel interpolation algorithm to refine the edge detection results; finally, fit the hub's shape and position parameters from the mounting-hole positions.
The positioning accuracy of prior art 2 depends on the region segmentation and edge detection results, especially the segmentation of shape details inside the hub region. In a real machining environment, when the background is complex or similar in color to the hub, segmentation algorithms cannot clearly bring out the detailed shapes inside the hub region, so the subsequent positioning steps fail. Moreover, when the hub workpiece is only partially captured, the method cannot compute the hub's shape and position.
3. Technical solution of prior art 3
In the patent "Automatic hub recognition device and method" (China, 103090790.A[P], 2013-05-08), Hu Chao, Cui Jialin, Qiu Jun, et al. proposed a device and method for recognizing the hub center hole, the hub mounting face, and the offset from the mounting face to the outer peripheral plane of the hub bottom. The method first measures the offset parameter automatically with a non-contact distance meter, covers all technical parameters of the hub, and builds a hub information database. Two image acquisition devices then capture images from above and below the hub: the image from above is the hub's top view, from which the hub's outline parameters are obtained; the image from below, i.e. the bottom view, yields the parameters of the center hole and the mounting face, such as the size, position, and shape of the mounting holes.
Prior art 3 requires a large amount of hub image information to be stored in advance, and frontal images from both above and below the hub must be acquired during recognition, so both the device and the recognition process are complex. Moreover, the purpose of the present invention is to locate specific parts of the hub while the relative position of the camera and the hub is not fixed, so this method does not suit the present scenario.
4. Technical solution of prior art 4
In the patent "A detection system and method for identifying hub models" (China, 103425969.A[P], 2013), Huang Qian, Wu Yuan, and Tang Dajun proposed an automatic hub-model recognition system consisting of a host computer connected to a CCD image sensor. The patent also provides a method implemented on this system, with the following steps: initialize the settings; acquire a hub-free image of the hub-model recognition area; create hub-model database records; and identify and determine the hub model. Based on the pre-built hub-model database, the system automatically identifies the model of each hub entering the recognition area during operation.
Prior art 4 requires a hub image database to be stored in advance, and when the hub workpiece is only partially captured it likewise cannot compute the hub's shape and position.
In summary, existing hub workpiece positioning technology has the following problems: (1) during hub matching, when the image is affected by illumination changes, translation, rotation, or scale changes, the positioning deviation of the hub workpiece is large; (2) under viewpoint changes or partial occlusion, hub positioning is difficult.
The abbreviations used in the present invention are as follows:
SIFT: Scale-Invariant Feature Transform;
DoG: Difference of Gaussians;
BBF: Best Bin First;
RANSAC: Random Sample Consensus.
Summary of the Invention
To solve the above problems in the prior art, the present invention designs a device and method for positioning hub workpieces that achieves the following two goals:
(1) reduce the positioning deviation of the hub workpiece during hub matching;
(2) make hub positioning easy under viewpoint changes and partial occlusion.
To achieve the above goals, the technical solution of the present invention is as follows. A device for positioning hub workpieces comprises an image acquisition module, a hub template information extraction module, a feature point extraction module for the hub to be detected, a feature point matching module, and a hub positioning module. The image acquisition module collects grayscale images of the hub workpiece. The hub template information extraction module extracts the SIFT feature points of the hub template image, the positions of the circle center and the valve stem, and four points on the circumference of the hub's outer edge. The feature point extraction module extracts SIFT feature point information from the image of the hub to be detected. The feature point matching module finds the feature point pairs matching the image of the hub to be detected with the hub template image, and computes the spatial mapping between the hub to be detected and the template hub. The hub positioning module locates the positions of the circle center, the valve stem, and the four outer-edge points of the corresponding hub in the image to be detected, and computes the radius of the hub in that image.
The output of the image acquisition module is connected to the hub template information extraction module and to the feature point extraction module for the hub to be detected; the inputs of the feature point matching module are connected to those two modules; and the output of the feature point matching module is connected to the hub positioning module.
A positioning method for the above hub workpiece positioning device comprises the following steps:
A. Offline processing
In the offline processing stage, a hub workpiece image is acquired, the SIFT feature point information of the hub image is extracted and stored, and the positions of the circle center and the valve stem are pre-marked on the template image. This includes the following steps:
A1. Acquire the hub workpiece image
The image acquisition module collects a grayscale image of the hub workpiece. The ideal hub template image should be captured in an environment with good illumination and little noise; the template image must have a uniform background color and contain only the hub, with no other interfering objects.
A2. Extract hub template information
The hub template information extraction module extracts the SIFT feature points of the hub template image, calibrates the positions of the circle center and the valve stem of the hub template, and measures the radius of the hub template. The specific steps are as follows:
A21. Parse the input hub workpiece template image, search the hub workpiece region for pixels satisfying the SIFT feature point properties, and compute and store the SIFT feature point description information; that is, obtain the hub's SIFT feature point template information according to the following sub-steps:
A211. Construct the image pyramid T
The input image is defined as f(x,y). Downsample f(x,y) I times to obtain an image pyramid T with (I+1) levels, where I = log2[min(M,N)] − 3, and M and N are the numbers of rows and columns of f(x,y), respectively. Downsampling here means taking the mean of each group of four neighboring pixels as one pixel of the downsampled image.
Define the level-0 image of the pyramid T as T0(x,y), i.e. the original image f(x,y); the level-i image is defined as Ti(x,y), i.e. the image obtained by downsampling the original image f(x,y) i times, i = 0,1,2,...,I.
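Step A211 can be sketched in Python/NumPy as follows. This is a minimal illustration, not the patent's implementation; the function names `downsample` and `build_pyramid` are ours. Downsampling averages each 2x2 block of neighbouring pixels, exactly as the text defines it.

```python
import numpy as np

def downsample(img):
    """Halve an image by averaging each 2x2 block of neighbouring pixels."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # drop odd edge row/col
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0

def build_pyramid(f):
    """Build the (I+1)-level pyramid T, with I = log2(min(M, N)) - 3."""
    M, N = f.shape
    I = int(np.log2(min(M, N))) - 3
    T = [np.asarray(f, dtype=float)]
    for _ in range(I):
        T.append(downsample(T[-1]))
    return T

f = np.random.default_rng(0).random((256, 256))
T = build_pyramid(f)   # min(M,N) = 256 gives I = 5, hence 6 levels
```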
A212. Construct the Gaussian pyramid L
Convolve Ti(x,y) with the Gaussian kernel function G(x,y,σ) while continuously varying the scale-space factor σ, giving the scale space Li:
Li(x,y,σ) = G(x,y,σ)*Ti(x,y) (1)
where the symbol '*' denotes the convolution operator, σ is the scale-space factor, and i = 0,1,2,...,I.
Apply the same operation to all (I+1) images in T to obtain L.
A213. Construct the DoG pyramid D
Take the difference of every two adjacent images in Li to obtain the DoG space Di, namely
Di(x,y,σ) = [G(x,y,kσ) − G(x,y,σ)]*Ti(x,y) = Li(x,y,kσ) − Li(x,y,σ) (2)
where the symbol '*' denotes the convolution operator, k is the constant ratio between two adjacent scales, and i = 0,1,2,...,I.
Apply the same operation to all (I+1) groups of images in L to obtain D.
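Steps A212 and A213 can be sketched together: blur one pyramid level at a ladder of scales (eq. 1) and subtract adjacent blurred images (eq. 2). A minimal NumPy sketch with a hand-rolled separable Gaussian; the values sigma0 = 1.6 and k = 2^(1/3) are Lowe's customary choices, assumed here because the patent does not fix them.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable convolution with a 1-D Gaussian kernel of std-dev sigma."""
    radius = int(3 * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    pad = np.pad(img, radius, mode='edge')
    rows = np.apply_along_axis(lambda r: np.convolve(r, g, mode='valid'), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, g, mode='valid'), 0, rows)

def dog_octave(Ti, sigma0=1.6, k=2 ** (1 / 3), n_scales=5):
    """Blur one pyramid level Ti at scales sigma0 * k**j (eq. 1), then take
    differences of adjacent blurred images to form the DoG stack (eq. 2)."""
    L = [gaussian_blur(Ti, sigma0 * k ** j) for j in range(n_scales)]
    D = [L[j + 1] - L[j] for j in range(n_scales - 1)]
    return L, D

Ti = np.ones((16, 16))        # a constant image: every blur leaves it unchanged
L, D = dog_octave(Ti)
```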
A214. Detect spatial local extrema in D
Using the Taylor expansion of the DoG function at a sample point,
D(X) = D + (∂D/∂X)^T X + (1/2) X^T (∂²D/∂X²) X
where X = (x,y,σ)^T.
Setting the derivative of D(X) to zero gives the extremum location with sub-pixel accuracy, namely
X̂ = −(∂²D/∂X²)^(−1) (∂D/∂X)
A215. Filter out unstable extrema to obtain the SIFT feature point set
First remove the low-contrast points in the image, i.e. extrema satisfying |D(X̂)| < 0.03 (the contrast threshold used in Lowe's formulation, with pixel values normalized to [0,1]); then use the Hessian matrix to remove extrema located on edges.
Let Dxx denote the second derivative in the x direction of an image at a given scale in the DoG pyramid, with Dyy and Dxy defined analogously; the Hessian matrix is then expressed as
H = | Dxx Dxy |
    | Dxy Dyy |
The two eigenvalues of H are defined as λ1 and λ2, where λ1 ≥ λ2 and λ1/λ2 = r; λ1 and λ2 correspond to the principal curvatures of the image in the x and y directions, respectively. When r is greater than the threshold 10, the extremum is judged to lie at an edge of the DoG response.
Define Tr(H) as the trace of H and Det(H) as its determinant; then
Tr(H) = Dxx + Dyy = λ1 + λ2
Det(H) = Dxx·Dyy − (Dxy)² = λ1·λ2
and the edge condition becomes Tr(H)²/Det(H) ≥ (r+1)²/r.
Computing Tr(H) and Det(H) in this way avoids solving for the eigenvalues directly, reducing the amount of calculation.
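The edge test of step A215 can be sketched as below: approximate the 2x2 Hessian at a pixel by finite differences and reject the point when the trace/determinant criterion with r = 10 fails. A minimal NumPy sketch with illustrative names:

```python
import numpy as np

def is_edge_point(D, x, y, r=10.0):
    """Hessian edge test at pixel (x, y) of one DoG image D: reject the
    extremum when Tr(H)^2 / Det(H) >= (r+1)^2 / r, i.e. when the ratio of
    principal curvatures exceeds r."""
    Dxx = D[y, x + 1] + D[y, x - 1] - 2.0 * D[y, x]
    Dyy = D[y + 1, x] + D[y - 1, x] - 2.0 * D[y, x]
    Dxy = (D[y + 1, x + 1] - D[y + 1, x - 1]
           - D[y - 1, x + 1] + D[y - 1, x - 1]) / 4.0
    tr = Dxx + Dyy
    det = Dxx * Dyy - Dxy * Dxy
    if det <= 0:               # curvatures of opposite sign: unstable point
        return True
    return tr * tr / det >= (r + 1.0) ** 2 / r

# An isotropic blob (corner-like) is kept, a ridge (edge-like) is rejected.
yy, xx = np.mgrid[0:21, 0:21]
blob = np.exp(-((xx - 10.0) ** 2 + (yy - 10.0) ** 2) / 10.0)
ridge = np.exp(-((xx - 10.0) ** 2) / 2.0)
```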
A216. Compute the SIFT feature point descriptors
Take an image region around the keypoint as the statistical range and divide it into several blocks; compute the gradient histogram of the points within each block, and assemble the vector representing the image information of that region.
Define the gradient modulus as m(x,y) and the gradient orientation as θ(x,y); then
m(x,y) = sqrt( (L(x+1,y) − L(x−1,y))² + (L(x,y+1) − L(x,y−1))² )
θ(x,y) = arctan( (L(x,y+1) − L(x,y−1)) / (L(x+1,y) − L(x−1,y)) )
First determine the image region needed for the descriptor: divide the neighborhood of the feature point into 4×4 sub-regions, each of size 3σ, where σ is the scale-space factor. Then compute the gradient orientation histogram of each sub-region: take the orientation of the feature point as the reference direction, compute the angle of each pixel's gradient orientation relative to this reference, project it onto 8 directions spaced π/4 apart over the interval 0 to 2π, and accumulate the gradient values in each direction; after a normalization step this yields an 8-dimensional vector descriptor per sub-region. Finally, concatenate the 8-dimensional vectors of all sub-regions, forming a 4×4×8 = 128-dimensional feature point descriptor.
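The gradient modulus and orientation of step A216 can be computed with central differences; a minimal NumPy sketch follows (the function names are illustrative; it uses arctan2 so orientations cover the full [0, 2π) range, and the 8-bin histogram mirrors the π/4-spaced directions in the text):

```python
import numpy as np

def grad_mag_ori(L):
    """Gradient modulus m(x,y) and orientation theta(x,y) from central
    differences on a blurred image L (border pixels are skipped)."""
    dx = L[1:-1, 2:] - L[1:-1, :-2]
    dy = L[2:, 1:-1] - L[:-2, 1:-1]
    m = np.sqrt(dx ** 2 + dy ** 2)
    theta = np.arctan2(dy, dx) % (2 * np.pi)
    return m, theta

def orientation_histogram(m, theta, n_bins=8):
    """Accumulate gradient magnitude into n_bins directions spaced pi/4
    apart over [0, 2*pi), as in the descriptor's sub-region histograms."""
    bins = (theta // (2 * np.pi / n_bins)).astype(int) % n_bins
    return np.bincount(bins.ravel(), weights=m.ravel(), minlength=n_bins)

L_img = np.tile(np.arange(10.0), (10, 1))   # intensity ramp along x
m, theta = grad_mag_ori(L_img)
hist = orientation_histogram(m, theta)      # all mass falls in direction 0
```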
A22. The hub template information extraction module calibrates and stores, in pixel units, the positions of six pixel points in the hub template image: the circle center O(x0,y0) of the hub workpiece, the valve-stem center Ogas(xgas,ygas) of the hub workpiece, and four points on the circumference of the hub's outer edge, O1(x1,y1), O2(x2,y2), O3(x3,y3), O4(x4,y4).
B. Online processing
In the online processing stage, first extract the SIFT feature points of the image to be detected; then search for feature points matching the hub template with the Best-Bin-First (BBF) search algorithm; next remove mismatched points with the RANSAC algorithm and compute the spatial mapping between the hub in the image to be detected and the template image; finally, from the marked points of the template image, compute the positions of the hub's circle center and valve stem in the image to be detected. This includes the following steps:
B1. Extract the SIFT feature points of the image to be detected
The feature point extraction module for the hub to be detected extracts the SIFT feature points of the image to be detected according to step A21; that is, it parses the input hub image, searches for pixels satisfying the SIFT feature point properties, and computes and stores their description information.
B2. Match feature points
The feature point matching module finds the feature points that match between the image of the hub to be detected and the hub template image, removes mismatched points, and computes the spatial mapping between the hub to be detected and the template hub. The specific steps are as follows:
B21. Perform initial matching between the reference image and the image to be matched using the nearest-neighbor/second-nearest-neighbor algorithm
Use the BBF algorithm to find, for a feature point p to be matched (feature vector vi), the nearest feature point pmin (feature vector vmin) and the second-nearest feature point pmin2 (feature vector vmin2). A point pair is accepted as a matched pair when it satisfies
Dist(vi,vmin)/Dist(vi,vmin2) < T
where T is a preset ratio threshold, and Dist(vi,vmin) and Dist(vi,vmin2) denote the Mahalanobis distances between vi and vmin and between vi and vmin2, respectively, i.e.
Dist(x,y) = sqrt( (x−y)^T S^(−1) (x−y) )
where S is the covariance matrix of the feature vectors and the superscript T denotes matrix transposition.
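The nearest/second-nearest acceptance rule of step B21 can be sketched as below. For clarity this uses brute-force Euclidean search rather than a BBF kd-tree, and a ratio threshold of 0.8 (Lowe's common choice; the patent leaves the value T unspecified):

```python
import numpy as np

def ratio_match(desc_a, desc_b, ratio=0.8):
    """Accept (i, j) as a match when the distance from descriptor i of desc_a
    to its nearest neighbour j in desc_b is below `ratio` times the distance
    to the second-nearest neighbour."""
    matches = []
    for i, v in enumerate(desc_a):
        d = np.linalg.norm(desc_b - v, axis=1)   # distances to every candidate
        j1, j2 = np.argsort(d)[:2]               # nearest and second nearest
        if d[j1] < ratio * d[j2]:
            matches.append((i, int(j1)))
    return matches

desc_a = np.array([[1.0, 0.0], [0.0, 1.0]])
desc_b = np.array([[1.0, 0.01], [0.0, 1.0], [5.0, 5.0]])
matches = ratio_match(desc_a, desc_b)
```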
B22. Remove mismatched points with the RANSAC algorithm, and compute the spatial correspondence between the target region and the template image.
Let point sets A and B be the initial matching point sets obtained from the template image and the detected image, respectively. The specific steps of the RANSAC algorithm are as follows:
B221. Randomly select 4 matching point pairs from the point pair sets A and B, and compute the projective transformation matrix H determined by these four pairs:
A point p(x,y) in the image is transformed by the matrix H into the point p′(x′,y′), i.e.
[x′, y′, 1]^T ∝ H [x, y, 1]^T
where
H = | h11 h12 h13 |
    | h21 h22 h23 |
    | h31 h32 1   |
That is, H can be solved from the matching point pairs p(x,y) and p′(x′,y′); every 4 matching point pairs determine one projective transformation matrix.
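The four-pair solve of step B221 can be sketched as a direct linear system: fixing h33 = 1 leaves eight unknowns, and each correspondence contributes two linear equations. A minimal NumPy sketch with illustrative function names:

```python
import numpy as np

def homography_from_4pts(src, dst):
    """Projective matrix H (h33 fixed to 1) mapping each src point (x, y)
    to its dst point (x', y'): eight linear equations in eight unknowns."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, pt):
    """Map one point through H, including the homogeneous division."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[0] / v[2], v[1] / v[2]

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
dst = [(2.0, 3.0), (3.0, 3.0), (2.0, 4.0), (3.0, 4.0)]  # pure translation
H = homography_from_4pts(src, dst)
```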
B222. Using the projective transformation matrix H computed in step B221, apply the spatial transformation to all feature points in point set A, obtaining point set B′.
Compute the coordinate error of all corresponding points in point sets B and B′, i.e. e = ||B − B′||. Given an error threshold σ, a point pair is regarded as an inlier pair if e < σ, and as an outlier pair otherwise.
B223. Repeat steps B221 and B222 to find the transformation with the largest number of inlier pairs, take the inlier pairs of that transformation as the new point sets A and B, and run a new round of iteration.
B224. Termination test: the iteration terminates when the number of inlier pairs obtained in an iteration equals the number of point pairs in the point sets A and B before that iteration.
B225. Result: the A and B of the final iteration form the matching point set with mismatched feature point pairs removed, and the corresponding projective transformation matrix H represents the required spatial transformation between the original image and the image to be detected.
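Steps B221-B222 can be sketched as a standard RANSAC loop. This is a simplified sketch: it keeps the best-supported H over a fixed number of random samples rather than re-iterating on the inlier set as step B223 describes, and all names and parameter values are illustrative.

```python
import numpy as np

def fit_h(src, dst):
    """Projective matrix (h33 = 1) from exactly four point correspondences."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def ransac_homography(A, B, sigma=3.0, max_iter=500, seed=0):
    """Repeatedly fit H from 4 random pairs, count pairs whose reprojection
    error is below sigma as inliers, and keep the best-supported H."""
    rng = np.random.default_rng(seed)
    A, B = np.asarray(A, dtype=float), np.asarray(B, dtype=float)
    ones = np.ones((len(A), 1))
    best_H, best_in = None, np.zeros(len(A), dtype=bool)
    for _ in range(max_iter):
        idx = rng.choice(len(A), 4, replace=False)
        try:
            H = fit_h(A[idx], B[idx])
        except np.linalg.LinAlgError:
            continue                      # degenerate (e.g. collinear) sample
        proj = np.hstack([A, ones]) @ H.T
        err = np.linalg.norm(proj[:, :2] / proj[:, 2:3] - B, axis=1)
        inliers = err < sigma
        if inliers.sum() > best_in.sum():
            best_H, best_in = H, inliers
    return best_H, best_in

rng = np.random.default_rng(1)
A_pts = rng.uniform(0, 100, size=(20, 2))
B_pts = 2.0 * A_pts + np.array([5.0, -3.0])   # true map: scale 2, shift (5, -3)
B_pts[0] += 100.0                              # one gross mismatch
H_best, inliers = ransac_homography(A_pts, B_pts)
```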
B3. Locate the circle center and valve stem of the hub in the image to be detected
The hub positioning module locates the positions of the hub's circle center and valve stem in the image to be detected, and computes the radius of the hub in that image. The specific steps are as follows:
B31. Using the spatial transformation matrix H obtained in step B22, compute the six pixel points in the image to be detected that correspond to the calibration points: the circle center O′(x′0,y′0) of the hub workpiece, the valve-stem center O′gas(x′gas,y′gas) of the hub workpiece, and the four points O′1(x′1,y′1), O′2(x′2,y′2), O′3(x′3,y′3), O′4(x′4,y′4) on the circumference of the hub's outer edge.
Taking the coordinates O′(x′0,y′0) of the hub circle center as an example:
[x′0, y′0, 1]^T ∝ H [x0, y0, 1]^T
B32. Compute the radius R′ of the hub workpiece in the image to be detected:
R′ = (1/4) Σ(j=1..4) sqrt( (x′j − x′0)² + (y′j − y′0)² )
i.e. the mean distance from the mapped circle center O′ to the four mapped outer-edge points O′1, O′2, O′3, O′4.
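Step B3 can be sketched as below: push the labelled template points through H with the homogeneous division, then estimate R′ from the four mapped rim points. Taking R′ as the mean centre-to-rim distance is an assumption here (one consistent reading of step B32), and the function name is illustrative.

```python
import numpy as np

def locate_hub(H, template_pts):
    """Map labelled template points (centre, valve stem, then four rim
    points) through the homography H, and estimate the detected hub's radius
    as the mean distance from the mapped centre to the mapped rim points."""
    pts = np.asarray(template_pts, dtype=float)
    proj = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    proj = proj[:, :2] / proj[:, 2:3]          # homogeneous division
    center, valve, rim = proj[0], proj[1], proj[2:]
    radius = float(np.mean(np.linalg.norm(rim - center, axis=1)))
    return center, valve, radius

# A scale-2 map with a (10, 20) shift: a radius-3 template hub maps to radius 6.
H = np.array([[2.0, 0.0, 10.0], [0.0, 2.0, 20.0], [0.0, 0.0, 1.0]])
pts = [(0, 0), (1, 0), (3, 0), (-3, 0), (0, 3), (0, -3)]
center, valve, radius = locate_hub(H, pts)
```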
与现有技术相比,本发明具有以下有益效果:Compared with the prior art, the present invention has the following beneficial effects:
1、为了有效地进行轮毂工件定位,本发明考虑到轮毂图像匹配过程中遇到的光照影响以及平移、旋转、尺度变化等问题,采用尺度不变特征变换(SIFT)特征点匹配方法来匹配出模板图像和待检测图像中空间对应的点对,然后通过这些点对判断模板图像和待检测图像中轮毂区域图像的空间对应关系,最后将模板图像中已知的标定点通过二者的空间对应关系计算出待检测图像中轮毂区域对应的点,从而达到轮毂定位的目的。文献”Lowe DG.Distinctive image features from scale-invariant keypoints.InternationalJournal of Computer Vision,2004,60(2):91-110.”中证明,图像上满足SIFT特性的特征点在图像发生光照变化、平移、旋转、尺度变化的时候都能保持良好的SIFT特性,所以本发明对环境光线、视角变化和部分遮挡都具有较好的鲁棒性,在不同干扰环境下都可以对轮毂工件进行定位,具有良好的定位效果。1. In order to effectively locate the hub workpiece, the present invention considers the influence of light and the problems of translation, rotation, and scale change encountered in the hub image matching process, and adopts the scale-invariant feature transformation (SIFT) feature point matching method to match out The spatial corresponding point pairs in the template image and the image to be detected, and then judge the spatial correspondence between the template image and the hub area image in the image to be detected through these point pairs, and finally pass the known calibration points in the template image through the spatial correspondence between the two The relationship calculates the points corresponding to the hub area in the image to be detected, so as to achieve the purpose of hub positioning. The document "Lowe DG. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 2004, 60(2): 91-110." proves that the feature points on the image that meet the SIFT characteristics change in illumination, translation, Good SIFT characteristics can be maintained during rotation and scale changes, so the present invention has better robustness to ambient light, viewing angle changes and partial occlusion, and can locate the hub workpiece under different interference environments, with good positioning effect.
2. Before actual positioning, the present invention obtains the feature-point information of the hub template image by offline processing and pre-calibrates the center and valve stem of the hub template, which reduces the amount of computation during actual positioning of the hub.
3. The present invention uses the SIFT algorithm for feature-point matching, which overcomes the illumination, translation, rotation, and scale problems encountered during image matching while remaining robust to background noise, viewpoint change, and partial occlusion.
4. The present invention uses the RANSAC (random sample consensus) method to reject mismatched point pairs, improving matching accuracy.
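The RANSAC rejection step can be illustrated with a toy version that fits a pure 2D translation; this is a sketch only (the actual method estimates a full spatial transformation), and all point values are hypothetical:

```python
import random

def ransac_translation(pairs, iters=200, tol=2.0, seed=0):
    """Toy RANSAC: fit a 2D translation to matched point pairs and reject
    mismatches. pairs: list of ((x1, y1), (x2, y2)) matches.
    Returns (refined translation, list of inlier pairs)."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(pairs)   # minimal sample: one pair
        t = (x2 - x1, y2 - y1)                   # candidate translation
        inliers = [((a, b), (c, d)) for (a, b), (c, d) in pairs
                   if abs(c - a - t[0]) <= tol and abs(d - b - t[1]) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # refine by averaging over the consensus set
    n = len(best_inliers)
    tx = sum(c - a for (a, _), (c, _) in best_inliers) / n
    ty = sum(d - b for (_, b), (_, d) in best_inliers) / n
    return (tx, ty), best_inliers
```

In practice a similarity or projective model replaces the translation, and OpenCV users would typically reach the same effect through the RANSAC option of a homography-fitting routine; the structure (sample, hypothesize, count inliers, refine) is the same.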
Description of Drawings
The present invention has 7 accompanying drawings, wherein:
Fig. 1 is a flow chart of the hub workpiece positioning method based on SIFT features.
Fig. 2 is a schematic diagram of the composition of the hub workpiece positioning device based on SIFT features.
Fig. 3 is a schematic diagram of the marked points on the hub template.
Fig. 4 shows the positioning result when the hub is rotated and translated.
Fig. 5 shows the positioning result when there are interfering objects around the hub.
Fig. 6 shows the positioning result when the background of the hub image is uneven.
Fig. 7 shows the positioning result when part of the hub is missing.
Detailed Description
The present invention is further described below in conjunction with the accompanying drawings. The composition of the hub workpiece positioning device is shown in Fig. 2, and the method flow is shown in Fig. 1.
To verify the effectiveness of the present invention, objective and subjective tests were carried out.
1. Subjective performance test (visual effect)
When the camera captures hub images, different kinds of noise interference can occur. To verify the effectiveness of the method of the present invention, several images were collected under different interference conditions for the experiments.
In the experiments, the template image is 690×691 pixels and each detection image is 1280×960 pixels; the hub template and its marked points are shown in Fig. 3. In Fig. 3, the center of the hub template and the center of the valve stem are marked with crosses, and the outer-edge circle of the hub template is marked with a solid circle. Hub positioning first requires the relevant coordinates in the template image: in this experiment, the hub center in the template image is at (353, 351), the valve-stem center is at (127, 246), and the hub radius is 339 pixels.
Owing to space limits, one image is selected for each interference condition, and the positioning results are shown in Figs. 4-7. The actual hub center and valve-stem center are marked with stars and the actual outer-edge circle of the hub with a solid circle, while the hub center and valve-stem center detected by the present invention are marked with crosses and the detected outer-edge circle with a dashed circle. In addition, the center and valve-stem regions are enlarged on the right side of each hub image to show the detection results more clearly.
2. Objective performance criteria
To evaluate positioning accuracy objectively, for each interference condition the present invention records the absolute differences between the measured radius, hub-center coordinates, and valve-stem-center coordinates and their actual values, i.e., the absolute deviation of each positioning result from the actual value, and computes the mean absolute difference. Table 1 lists the mean absolute differences between the positioning results (hub center, valve-stem center, radius) and the actual values under the different interference conditions; all values in the table are in pixels.
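The statistic described above is the ordinary mean absolute difference; a short sketch (the sample values below are hypothetical, not Table 1 data):

```python
def mean_abs_diff(measured, actual):
    """Mean absolute difference between measured values and ground truth,
    e.g. detected hub radii (in pixels) across a set of test images."""
    assert len(measured) == len(actual)
    return sum(abs(m - a) for m, a in zip(measured, actual)) / len(measured)

# hypothetical radii from three detections vs. the true radius of 339 px
print(mean_abs_diff([341, 338, 339], [339, 339, 339]))  # -> 1.0
```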
Table 1. Mean absolute differences between the hub positioning results of the present method and the actual values
Table 1 shows that even when the hub region in the image undergoes rotation and translation, has an uneven background or nearby interfering objects, or is even partially missing, hub positioning based on SIFT feature-point matching still yields good results and is almost unaffected by these interference factors.
3. For the technical solution of the present invention, the following alternatives can also accomplish the purpose of the invention:
(1) In more ideal working environments, e.g. with sufficient lighting and no occlusion, other feature-point matching algorithms that are less robust but computationally cheaper, such as the SURF algorithm, can be used for matching.
(2) The SIFT algorithm is a feature-point matching method; for a regular, rotationally symmetric workpiece such as a hub, line features can also be used instead of point features for matching.
(3) Based on the present invention, under good lighting conditions the hub region can first be segmented out by image region segmentation before SIFT feature-point matching, which effectively reduces the computation of the positioning method.
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410349103.9A CN104123542B (en) | 2014-07-18 | 2014-07-18 | Device and method for positioning hub workpiece |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104123542A CN104123542A (en) | 2014-10-29 |
CN104123542B true CN104123542B (en) | 2017-06-27 |
Family
ID=51768947
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410349103.9A Expired - Fee Related CN104123542B (en) | 2014-07-18 | 2014-07-18 | Device and method for positioning hub workpiece |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104123542B (en) |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104680550A (en) * | 2015-03-24 | 2015-06-03 | 江南大学 | Method for detecting defect on surface of bearing by image feature points |
CN105423975B (en) * | 2016-01-12 | 2018-02-09 | 济南大学 | The calibration system and method for a kind of large-scale workpiece |
CN105976358B (en) * | 2016-04-27 | 2018-07-27 | 北京以萨技术股份有限公司 | A method of the fast convolution for the more convolution kernels of feature pyramid calculates |
CN106325205B (en) * | 2016-09-20 | 2019-01-25 | 图灵视控(北京)科技有限公司 | A kind of hub installing hole flexibility automatic processing system based on machine vision |
CN109427050B (en) * | 2017-08-23 | 2022-04-29 | 阿里巴巴集团控股有限公司 | Guide wheel quality detection method and device |
CN107866386B (en) * | 2017-09-30 | 2020-10-16 | 绿港环境资源股份公司 | Perishable waste identification system and method |
CN107862690B (en) * | 2017-11-22 | 2023-11-14 | 佛山科学技术学院 | Circuit board component positioning method and device based on feature point matching |
CN108491841A (en) * | 2018-03-21 | 2018-09-04 | 东南大学 | A kind of automotive hub type identification monitoring management system and method |
CN108665057A (en) * | 2018-03-29 | 2018-10-16 | 东南大学 | A kind of more production point wheel hub image classification methods based on convolutional neural networks |
CN109060262A (en) * | 2018-09-27 | 2018-12-21 | 芜湖飞驰汽车零部件技术有限公司 | A kind of wheel rim weld joint air-tight detection device and air-tightness detection method |
CN109592433B (en) * | 2018-11-29 | 2021-08-10 | 合肥泰禾智能科技集团股份有限公司 | Goods unstacking method, device and system |
CN109871854B (en) * | 2019-02-22 | 2023-08-25 | 大连工业大学 | Quick hub identification method |
CN111191708A (en) * | 2019-12-25 | 2020-05-22 | 浙江省北大信息技术高等研究院 | Automatic sample key point marking method, device and system |
CN111259971A (en) * | 2020-01-20 | 2020-06-09 | 上海眼控科技股份有限公司 | Vehicle information detection method and device, computer equipment and readable storage medium |
CN111687444B (en) * | 2020-06-16 | 2021-04-30 | 浙大宁波理工学院 | A method and device for identifying and locating a three-dimensional identification code of an automobile wheel hub |
CN112198161A (en) * | 2020-10-10 | 2021-01-08 | 安徽和佳医疗用品科技有限公司 | PVC gloves real-time detection system based on machine vision |
CN112883963B (en) * | 2021-02-01 | 2022-02-01 | 合肥联宝信息技术有限公司 | Positioning correction method, device and computer readable storage medium |
CN113432585A (en) * | 2021-06-29 | 2021-09-24 | 沈阳工学院 | Non-contact hub position accurate measurement method based on machine vision technology |
CN113591923A (en) * | 2021-07-01 | 2021-11-02 | 四川大学 | Engine rocker arm part classification method based on image feature extraction and template matching |
CN113720280A (en) * | 2021-09-03 | 2021-11-30 | 北京机电研究所有限公司 | Bar center positioning method based on machine vision |
CN114800533B (en) * | 2022-06-28 | 2022-09-02 | 诺伯特智能装备(山东)有限公司 | Sorting control method and system for industrial robot |
CN116977341B (en) * | 2023-09-25 | 2024-01-09 | 腾讯科技(深圳)有限公司 | Dimension measurement method and related device |
CN117058151B (en) * | 2023-10-13 | 2024-01-05 | 山东骏程金属科技有限公司 | Hub detection method and system based on image analysis |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102799859A (en) * | 2012-06-20 | 2012-11-28 | 北京交通大学 | Method for identifying traffic sign |
CN103077512A (en) * | 2012-10-18 | 2013-05-01 | 北京工业大学 | Feature extraction and matching method and device for digital image based on PCA (principal component analysis) |
WO2014061221A1 (en) * | 2012-10-18 | 2014-04-24 | 日本電気株式会社 | Image sub-region extraction device, image sub-region extraction method and program for image sub-region extraction |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3576987B2 (en) * | 2001-03-06 | 2004-10-13 | 株式会社東芝 | Image template matching method and image processing apparatus |
Non-Patent Citations (3)
Title |
---|
Li Dandan. Hub positioning method based on image matching technology. Academic Resource Discovery Platform, 2014-06-16, p. 1. *
Cheng Dezhi et al. Image matching method based on an improved SIFT algorithm. Computer Simulation, 2011-07, 28(7): 285-289. *
Le Ying et al. Detection method for hub shape and position parameters based on an area-array CCD. Bulletin of Science and Technology, 2009-03, 25(2): 196-201. *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104123542B (en) | Device and method for positioning hub workpiece | |
CN110163853B (en) | Edge defect detection method | |
CN110148162A (en) | A kind of heterologous image matching method based on composition operators | |
CN111862037A (en) | Method and system for geometric feature detection of precision hole parts based on machine vision | |
CN107862690B (en) | Circuit board component positioning method and device based on feature point matching | |
CN107705293A (en) | A kind of hardware dimension measurement method based on CCD area array cameras vision-based detections | |
CN112686920A (en) | Visual measurement method and system for geometric dimension parameters of circular part | |
CN110110760A (en) | A kind of workpiece positioning and recognition methods based on machine vision | |
CN105675626A (en) | Character defect detecting method of tire mold | |
CN109986172B (en) | Welding seam positioning method, equipment and system | |
CN108257171A (en) | Car radar assembling aperture detection method based on light vision | |
CN111062940A (en) | Screw positioning and identifying method based on machine vision | |
CN109815822B (en) | Patrol diagram part target identification method based on generalized Hough transformation | |
CN105868766A (en) | Method for automatically detecting and identifying workpiece in spraying streamline | |
CN110222661B (en) | Feature extraction method for moving target identification and tracking | |
CN106504262A (en) | A kind of small tiles intelligent locating method of multiple features fusion | |
CN113705564A (en) | Pointer type instrument identification reading method | |
TWI543117B (en) | Method for recognizing and locating object | |
CN113436262A (en) | Vision-based vehicle target position and attitude angle detection method | |
CN107388991A (en) | A kind of more fillet axial workpiece radius of corner measuring methods in end face | |
CN105224941B (en) | Object identification and positioning method | |
CN118247331B (en) | Automatic part size detection method and system based on image recognition | |
CN114187269B (en) | Rapid detection method for surface defect edge of small component | |
CN102289495B (en) | Image search matching optimization method applied to model matching attitude measurement | |
CN104966050B (en) | A kind of method for detecting lane lines based on ASM |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 20170627 |