CN103325106A - Moving workpiece sorting method based on LabVIEW - Google Patents

Moving workpiece sorting method based on LabVIEW

Info

Publication number
CN103325106A
Authority
CN
China
Prior art keywords
workpiece
algorithm
camera
workpieces
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013101293679A
Other languages
Chinese (zh)
Other versions
CN103325106B (en)
Inventor
徐建明
朱海涛
何德峰
张健
陈张雷
陆群
丁毅
杨金桥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201310129367.9A priority Critical patent/CN103325106B/en
Publication of CN103325106A publication Critical patent/CN103325106A/en
Application granted granted Critical
Publication of CN103325106B publication Critical patent/CN103325106B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Manipulator (AREA)

Abstract

Provided is a moving workpiece sorting method based on LabVIEW, comprising the following steps: (1) an image processing algorithm for the video stream, used to detect the positions and postures of moving workpieces from the captured video; (2) a camera calibration algorithm, used to convert the pixel coordinates of the moving workpieces in the camera images into the physical coordinates of the workpieces in a world coordinate system; (3) a radial distortion correction algorithm for the camera, used to eliminate the radial image distortion introduced by a wide-angle lens; (4) a Kalman prediction algorithm, used to predict the positions of the moving workpieces and obtain workpiece positions without time lag; (5) a workpiece type recognition algorithm, wherein, as workpieces are conveyed on a conveyor belt, the camera captures images of the workpieces and their types are recognized to facilitate classification; and (6) a motion control method for the mechanical arm, comprising a trajectory planning method and a trajectory control method, so that the mechanical arm can select an optimal path and rapidly sort the workpieces.

Description

Moving workpiece sorting method based on LabVIEW

Technical Field

This patent relates to the technical field of machine-vision-based sorting methods, and in particular to a visual detection method and a product sorting method.

Background Art

As a relatively new technology in the field of automatic control, the mechanical arm is increasingly recognized for its value: first, it can replace manual labor in repetitive physical tasks; second, it can complete jobs quickly and efficiently according to a preset workflow; third, it greatly improves working conditions, significantly raises labor productivity, and accelerates the automation of industrial production.

With rapid economic development and ever fiercer competition among enterprises, conveyor belts have been widely adopted to improve efficiency and reduce production costs. Conveyor belts are used throughout industrial production systems; they save labor, increase productivity, and lower costs, playing a major role in industrial production. Whether the task is handling, assembly, or workpiece quality inspection, the target must first be grasped, so there is strong demand for classifying and grasping targets on a conveyor belt.

Traditional industrial robots generally rely on teaching or offline programming for path planning and motion programming, and during machining they simply repeat pre-programmed actions; traditional industrial robot control technology therefore cannot sort moving workpieces.

Summary of the Invention

One purpose of this patent is to provide a moving workpiece detection method with high detection accuracy that can adapt to changes in lighting conditions.

Another purpose of this patent is to provide a highly automated, highly reliable workpiece sorting method that can classify a relatively wide variety of workpieces.

The moving workpiece sorting method of this patent comprises the following steps:

1) an image processing algorithm for the video stream, used to detect the position and posture of moving workpieces from the captured video;

2) a camera calibration algorithm, used to convert the pixel coordinates of the moving workpiece in the camera image into the physical coordinates of the workpiece in the world coordinate system;

3) a radial distortion correction algorithm for the camera, used to eliminate the radial image distortion introduced by the wide-angle lens;

4) a Kalman prediction algorithm, used to predict the position of the moving workpiece and obtain the workpiece position without time lag;

5) a workpiece type recognition algorithm: as a workpiece passes along the conveyor belt, the camera captures its image and its type is recognized to facilitate classification;

6) a motion control method for the manipulator, comprising a trajectory planning method and a trajectory control method, so that the manipulator can select an optimal path and sort workpieces rapidly.

The image processing algorithm covers moving-target extraction and posture detection of the moving workpiece. This patent uses the background subtraction method to obtain the image coordinates of the moving workpiece. Background subtraction detects moving objects by comparing the current frame of the image sequence with a background reference model, and its performance depends on the background modeling technique used. Because the background model is continuously updated, the method adapts to changes in lighting, whereas an ordinary fixed-threshold method must have its threshold changed whenever the lighting environment changes. Define image(x,y) as the current frame, acc(x,y) as the background model, frimage(x,y) as the foreground frame, and α as the background update rate in the range 0-1; acc(x,y) is initialized to 0.

acc(x,y) = α*image(x,y) + (1-α)*acc(x,y)

frimage(x,y) = image(x,y) - acc(x,y)

Finally, the workpiece position is detected from the separated foreground image. For posture detection of the moving workpiece, the image is first converted to grayscale, the points whose gradient exceeds a threshold along the selected detection direction are located, a straight line is fitted to these points, and the angle is computed.
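
The update rule above maps directly onto array operations. Below is a minimal NumPy sketch of the running-average background model and foreground separation, written as a sanity check of the formulas rather than as the patented implementation; the foreground threshold and the centroid-based position estimate are illustrative assumptions.

```python
import numpy as np

def update_background(acc, frame, alpha=0.15):
    """Running-average background model: acc = alpha*image + (1 - alpha)*acc."""
    return alpha * frame + (1.0 - alpha) * acc

def extract_foreground(frame, acc, fg_threshold=30.0):
    """Foreground frame = current frame - background; threshold into a binary mask."""
    frimage = frame.astype(np.float32) - acc
    return np.abs(frimage) > fg_threshold

def workpiece_position(mask):
    """Estimate the workpiece position as the centroid of the foreground pixels."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Typical loop: acc is initialized to 0 and updated once per captured grayscale frame.
# acc = np.zeros_like(first_frame, dtype=np.float32)
# for frame in video_frames:
#     acc = update_background(acc, frame.astype(np.float32))
#     position = workpiece_position(extract_foreground(frame, acc))
```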

The camera calibration algorithm addresses the calibration of a monocular two-dimensional vision system. The camera is mounted perpendicular to the working plane. The world coordinate system lies on the working plane with the Z axis pointing vertically downward; there is no rotation between the camera coordinate system and the world coordinate system, only a translation, so the rotation matrix is R = I (identity matrix) and the translation vector is P = [0 0 d]^T, where d is the distance from the camera optical axis center Oc to the working plane.

[xc, yc, zc, 1]^T = [R, P; 0, 1] * [xw, yw, zw, 1]^T = [xw, yw, d, 1]^T

where (xc, yc, zc) are the coordinates of a scene point in the camera coordinate system and (xw, yw, zw) are its coordinates in the world coordinate system. The camera-frame coordinates of the scene point follow directly from the formula above: xc = xw, yc = yw, and for a workpiece moving on the working plane zw = 0.

[u, v, 1]^T = [kx, 0, u0; 0, ky, v0; 0, 0, 1] * [xc/zc, yc/zc, 1]^T

kxd = (u2 - u1)/(xw2 - xw1),  kyd = (v2 - v1)/(yw2 - yw1)

where (u, v) are the image coordinates of a reference point; (u1, v1) are the image coordinates of point P1 and (xw1, yw1) its two-dimensional world coordinates; (u2, v2) are the image coordinates of point P2 and (xw2, yw2) its two-dimensional world coordinates; (u0, v0) are the image coordinates of the camera optical axis center; kxd = kx*d and kyd = ky*d are the calibrated camera parameters.

xwi = xw1 + (ui - u1)/kxd,  ywi = yw1 + (vi - v1)/kyd

where (ui, vi) are the image coordinates of an arbitrary point Pi and (xwi, ywi) are its two-dimensional world coordinates.
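
The two-point calibration and the pixel-to-world conversion reduce to a few lines of arithmetic. The sketch below is illustrative (the function names are not from the patent); the commented example reuses the two reference points reported in the embodiment further on.

```python
def calibrate_two_points(p1_img, p1_world, p2_img, p2_world):
    """Compute kxd, kyd from two reference points given as (image px, world coords)."""
    kxd = (p2_img[0] - p1_img[0]) / (p2_world[0] - p1_world[0])
    kyd = (p2_img[1] - p1_img[1]) / (p2_world[1] - p1_world[1])
    return kxd, kyd

def pixel_to_world(u, v, p1_img, p1_world, kxd, kyd):
    """xwi = xw1 + (ui - u1)/kxd,  ywi = yw1 + (vi - v1)/kyd."""
    x_w = p1_world[0] + (u - p1_img[0]) / kxd
    y_w = p1_world[1] + (v - p1_img[1]) / kyd
    return x_w, y_w

# Example with the reference points used in the embodiment below:
# P1: image (454.62, 194.04), world (280.70, 2.41)
# P2: image (324.29, 280.64), world (69.89, 142.48)
# kxd, kyd = calibrate_two_points((454.62, 194.04), (280.70, 2.41),
#                                 (324.29, 280.64), (69.89, 142.48))
```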

A wide-angle camera exhibits lens distortion, of which radial distortion is the most pronounced, so the camera is corrected for radial distortion. In this patent the radial distortion matrix of the camera is fitted by the linear regression (LR) method. The distortion equation for radial distortion is given first:

u' = a00 + a10*u + a01*v + a11*u*v + a20*u^2 + a02*v^2
v' = b00 + b10*u + b01*v + b11*u*v + b20*u^2 + b02*v^2

[u'; v'] = [a00, a10, a01, a11, a20, a02; b00, b10, b01, b11, b20, b02] * [1, u, v, u*v, u^2, v^2]^T

where (u, v) are the image coordinates of a reference point, (u', v') are its distortion-corrected image coordinates, and ast, bst are the correction coefficients, s, t = 0, 1, 2.
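
Fitting the twelve coefficients by linear regression amounts to an ordinary least-squares solve over the monomial basis [1, u, v, uv, u^2, v^2]. A minimal NumPy sketch under that reading (helper names are illustrative):

```python
import numpy as np

def design_row(u, v):
    """Monomial basis of the radial distortion model: [1, u, v, uv, u^2, v^2]."""
    return [1.0, u, v, u * v, u * u, v * v]

def fit_distortion(points_distorted, points_corrected):
    """Least-squares fit of the 2x6 correction matrix [a00..a02; b00..b02].

    points_distorted:  list of (u, v) measured in the distorted image.
    points_corrected:  list of (u', v') those points should map to.
    At least 6 point pairs are needed for the 12 coefficients.
    """
    A = np.array([design_row(u, v) for u, v in points_distorted])   # (n, 6)
    B = np.array(points_corrected, dtype=float)                     # (n, 2)
    coeffs, *_ = np.linalg.lstsq(A, B, rcond=None)                  # (6, 2)
    return coeffs.T                                                 # (2, 6)

def undistort_point(coeffs, u, v):
    """Apply the fitted polynomial to one pixel coordinate, returning (u', v')."""
    u_c, v_c = coeffs @ np.array(design_row(u, v))
    return u_c, v_c
```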

The velocity of the moving workpiece is unknown, so its position is predicted with the Kalman prediction algorithm. Two frames are captured 1 s apart, and the change Δx in workpiece position gives the estimated velocity. A system model is established; the two-dimensional world coordinates of the moving workpiece computed with the calibration method above serve as the position observation, the estimated velocity serves as the velocity measurement, and together they form the observation vector from which the Kalman filter computes the optimal prediction of the workpiece motion state.

For workpiece type recognition, samples of the various workpieces are collected, features are detected for each sample, a shape model is trained, and a workpiece sample database is built. The features of the target in the region of interest (ROI) of the real-time image are then detected and matched against the models in the database; a matching score is computed and the result with the highest score is taken as the classifier output.

The manipulator motion control method comprises trajectory planning, trajectory control of the mechanical arm, and grasp control of the electric gripper. The arm controller plans a trajectory from the workpiece position predicted by the host computer, drives the arm along the desired trajectory while adjusting the gripper posture, and grasps the moving workpiece on the conveyor belt. Finally, the different types of workpieces are placed in their designated locations, and the next round of workpiece sorting begins.

Brief Description of the Drawings

Figure 1 is a schematic diagram of background updating in the difference method;

Figure 2 is a schematic diagram of the workpiece classifier;

Figure 3 is a functional block diagram of the Kalman prediction system;

Figure 4 shows the relationship between the predicted position and the observed position;

Figure 5 shows the error between the predicted position and the observed position;

Figure 6 is a schematic diagram of trajectory planning for the mechanical arm servo control;

Figure 7 is a flow chart of the mechanical arm servo control.

Detailed Description of the Embodiments

The technical solution of this patent is further described below with reference to the drawings and an embodiment.

Embodiment:

To extract the moving target with the background subtraction method, the background must first be modeled. Continuously updating the background model allows it to adapt to changes in ambient light. Figure 1 is a schematic diagram of the system's background update; the background frame is updated iteratively following the loop in the figure.

acc(x,y) = α*image(x,y) + (1-α)*acc(x,y)

frimage(x,y) = image(x,y) - acc(x,y)

where image(x,y) is the current frame, acc(x,y) is the background model, frimage(x,y) is the foreground frame, and α is the background update rate; α = 0.15 is chosen and acc(x,y) is initialized to 0. Finally, the workpiece position is detected from the separated foreground image.

For workpiece posture detection, the color image is first converted to grayscale and the maximum extents of the workpiece in the vertical and horizontal directions are measured to judge whether its posture tends toward vertical or horizontal. If it tends toward vertical, edges are detected from left to right; if toward horizontal, from top to bottom. Edge detection locates the points whose gradient exceeds a threshold along the selected direction; a straight line is then fitted to these points and the angle is computed.
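
The line-fitting step at the end can be sketched with a least-squares fit over the thresholded gradient points; this is only an illustrative stand-in for the LabVIEW edge-detection and fitting routines, and it assumes the edge points have already been extracted.

```python
import numpy as np

def workpiece_angle(edge_points):
    """Fit a straight line to edge points and return its inclination in degrees.

    edge_points: iterable of (x, y) pixel coordinates whose gradient along the
    selected detection direction exceeded the threshold.
    """
    pts = np.asarray(edge_points, dtype=float)
    xs, ys = pts[:, 0], pts[:, 1]
    # Least-squares fit y = m*x + c; for a near-vertical edge, swap the axes first.
    m, _c = np.polyfit(xs, ys, deg=1)
    return float(np.degrees(np.arctan(m)))
```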

Workpiece recognition on the production line must return a classification result within a short time, and the shape, texture, and other features of the workpieces are relatively simple, so model matching is performed directly on the top view; the principle of the classifier is shown in Figure 2.

First, samples of the various workpieces are entered into LabVIEW's pattern classify program; features are detected for each sample, the shape model is trained, and a workpiece sample database is built. The features of the target in the region of interest (ROI) of the real-time image are then detected and matched against the models in the database; a matching score is computed and the result with the highest score is taken as the classifier output.
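
The training and matching themselves are done with LabVIEW's pattern classify tools. Purely as an illustration of the match-and-select step, a simplified analogue can be sketched with a normalized-correlation score over same-size grayscale templates; this is not the Vision Development Module pipeline.

```python
import numpy as np

def match_score(roi, template):
    """Normalized cross-correlation between an ROI and a stored template
    (both grayscale arrays of the same shape)."""
    a = roi.astype(float) - roi.mean()
    b = template.astype(float) - template.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def classify(roi, template_db):
    """Return the workpiece type whose template matches the ROI best."""
    scores = {name: match_score(roi, tpl) for name, tpl in template_db.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]
```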

Distortion correction procedure: to compute the 12 parameters of the camera's radial distortion matrix, the image coordinates of at least 6 points before and after correction are required. A calibration-board image is captured and the threshold is set so that the reference points can be detected. Twelve points in the region to be corrected are selected and their image coordinates (ui, vi) and (ui', vi'), i = 1-12, are recorded (see the table below); they form the equation shown below, from which the parameters a and b are solved.

[u1', u2', ..., u12'; v1', v2', ..., v12'] = [a00, a10, a01, a11, a20, a02; b00, b10, b01, b11, b20, b02] * [1, 1, ..., 1; u1, u2, ..., u12; v1, v2, ..., v12; u1*v1, u2*v2, ..., u12*v12; u1^2, u2^2, ..., u12^2; v1^2, v2^2, ..., v12^2]

Point   u (pixel)   v (pixel)   u' (pixel)   v' (pixel)
1       538.1       330.81      550.9        348.46
2       626.67      325.28      653.71       347.99
3       536.94      190.68      551.16       210.82
4       625.82      192.61      653.94       210.58
5       537.88      225.62      551.22       244.95
6       627.1       225.48      653.83       245.01
7       538.59      296.06      551.14       314.02
8       627.4       292.15      653.87       314.02
9       569.88      260.09      585.58       279.49
10      599.41      259.49      620.01       279.76
11      598.97      225.48      620.11       244.98
12      569.74      294.81      585.55       314.12

The resulting distortion correction matrix is:

a00 = 253.137     a10 = 0.11401     a01 = -0.168791    a11 = 3.66461E-5     a20 = 0.000887233    a02 = 0.00026485
b00 = 94.3187     b10 = -0.131471   b01 = 0.666474     b11 = 0.000604781    b20 = -4.54589E-6    b02 = -1.97698E-5

Monocular two-dimensional vision system calibration procedure: the corrected image coordinates and the world coordinates of two reference points on the calibration board are selected, point P1: (u1', v1') = (454.62, 194.04), (x1, y1) = (280.70, 2.41); point P2: (u2', v2') = (324.29, 280.64), (x2, y2) = (69.89, 142.48). Calibration of the camera parameters is then implemented in software according to the monocular two-dimensional vision system principle described above.

Kalman prediction implementation: a system model is first established; the two-dimensional world coordinates of the moving workpiece computed with the calibration method above are used as the position observation, and the estimated velocity as the velocity measurement, forming the observation vector. The optimal prediction of the Kalman filter is thus obtained; Figure 3 is a block diagram of the prediction system. The following system model is established:

dx/dt = [0, 1, 0; 0, 0, 1; 0, 0, 0]*x(t) + [0; 0; 1]*u(t) + [1, 0, 0; 0, 1, 0; 0, 0, 1]*w(t)

y(t) = [1, 0, 0; 0, 1, 0]*x(t) + [0; 0]*u(t) + v(t)

In this model the first equation is the system state equation: x(t) is the state vector composed of the workpiece position x in the X direction, its velocity vx, and its acceleration ax, and u(t) is the control input of the motion system. When u(t) = 0 the system acceleration is zero and the motion is uniform; w(t) is the system noise, generally taken as zero-mean Gaussian white noise. The second equation is the output equation defining the system output: the output y(t) is defined as a vector composed of position and velocity (the measurement must have the same form as the output), and v(t) is the measurement noise, also taken as zero-mean Gaussian white noise.

Programming the Kalman prediction algorithm: two frames are captured 1 s apart, and the change Δx in workpiece position gives the estimated velocity, which is fed to the Kalman filter as the initial velocity v0. From the system model and the observation vector composed of position and velocity, the Kalman filter computes the optimal estimate of the motion state. Figure 4 plots the predicted position X (solid line) against the observed position X' (dashed line). Because the velocity estimate carries some bias, X initially deviates from X', but as the Kalman prediction model is continuously updated the predicted value X gradually approaches X' and the error between them converges to zero. Figure 5 shows the error between the predicted value X and the observed value X'; the scatter of the points is due to observation noise, but the error clearly tends to zero.
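
For reference, the continuous model above can be discretized and run as a standard predict/update Kalman filter. The sketch below uses the 1 s frame spacing of the embodiment as the sampling interval and a state [x, vx, ax] with an observation of position and velocity; the noise covariances Q and R are placeholder assumptions, not values from the patent.

```python
import numpy as np

dt = 1.0                                   # sampling interval (1 s between frames)

F = np.array([[1.0, dt, 0.5 * dt**2],      # discretized constant-acceleration model
              [0.0, 1.0, dt],
              [0.0, 0.0, 1.0]])
H = np.array([[1.0, 0.0, 0.0],             # observe position ...
              [0.0, 1.0, 0.0]])            # ... and velocity
Q = np.eye(3) * 1e-3                       # process noise covariance (assumed)
R = np.eye(2) * 1e-1                       # measurement noise covariance (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle; z = [observed position, estimated velocity]."""
    x_pred = F @ x                         # time update (prediction)
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R               # measurement update
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new, x_pred            # x_pred is the lag-free predicted state

# Initialization: observed position, v0 = Δx / dt from two frames, zero acceleration.
# x = np.array([x_obs, delta_x / dt, 0.0]); P = np.eye(3)
```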

Grasp control of the SMC LEFH32K2-32-R16N3 electric gripper: a grasp control program is written in the gripper's programming software, defining action 0 (grasp) and action 1 (release). Coded signals are then sent to the SMC controller through the I/O port of the EPSON arm controller to realize control of the gripper. In the arm programming software, the "Out1,208" instruction sends the binary code 11010000 from port 1 to reset the gripper; "Out1,144" sends 10010000 to write action 0; "Out1,176" sends 10110000, setting the drive bit to 1 and executing the grasp action. Similarly, "Out1,145" writes action 1 and "Out1,177" executes the release action.
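
For quick reference, the I/O codes listed above can be tabulated together with their bit patterns; the interpretation of the individual bits (reset/write/drive flags plus the action number in bit 0) is inferred from the description and should be treated as an assumption.

```python
# Decimal values sent with the "Out1,<value>" instruction and their bit patterns.
GRIPPER_COMMANDS = {
    "reset":         208,   # 0b11010000
    "write_grasp":   144,   # 0b10010000  (action 0)
    "drive_grasp":   176,   # 0b10110000
    "write_release": 145,   # 0b10010001  (action 1)
    "drive_release": 177,   # 0b10110001
}

for name, code in GRIPPER_COMMANDS.items():
    print(f"{name:14s}  Out1,{code:<4d}  {code:08b}")
```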

Servo control of the EPSON SCARA-G6 mechanical arm: the PC establishes a connection with the arm controller over the TCP/IP protocol; the IP address of the controller's #201 network port is set to the same subnet as the PC, the "OpenNet#201" instruction opens the corresponding network port, "Read#201,data1$,12" reads the received motion information, which includes X, Y, V, and deg (angle), and the "MOVE" instruction controls the motion of the arm. The system does not track the workpiece in real time, because the arm would occlude the moving workpiece. Instead, a trigger position x0 = 211 is defined (in the arm coordinate system): when the x coordinate of the workpiece reaches x0, a trajectory is planned for the manipulator, the arm is driven along the desired trajectory, and the gripper is commanded to grasp the workpiece and carry it to the required position; a signal is then sent to release the workpiece and the next grasp cycle begins. The overall servo control scheme is shown in Figure 6 and the servo control flow chart in Figure 7.
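
On the PC side, the trigger logic can be sketched as a small TCP client. This is only an illustrative sketch under stated assumptions: the packet format actually consumed by the controller's "Read#201,data1$,12" instruction is not specified here, and the connection parameters and the predicted_workpiece_states() generator are hypothetical.

```python
import socket

X0 = 211.0    # trigger x coordinate in the arm coordinate system (from the embodiment)

def send_target(sock, x, y, v, deg):
    """Send one target (X, Y, V, deg) to the arm controller over TCP.

    The comma-separated text format is an assumption; the SPEL program on the
    controller reads and parses the data on its side.
    """
    sock.sendall(f"{x:.2f},{y:.2f},{v:.2f},{deg:.2f}\r\n".encode("ascii"))

# Sketch of the PC-side sorting trigger: once the predicted workpiece x coordinate
# reaches X0, hand the predicted grasp point to the controller, which plans the
# trajectory, grasps, places, and releases the workpiece.
# with socket.create_connection((controller_ip, controller_port)) as sock:
#     for x, y, v, deg in predicted_workpiece_states():
#         if x >= X0:
#             send_target(sock, x, y, v, deg)
```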

Claims (6)

1. A workpiece sorting method based on machine vision technology, comprising the following steps:
1) an image processing algorithm for the video stream, used to detect the position and attitude of moving workpieces from the captured video;
2) a camera calibration algorithm, used to convert the pixel coordinates of the moving workpiece in the camera image into the physical coordinates of the workpiece in the world coordinate system;
3) a radial distortion correction algorithm for the camera, used to eliminate the radial image distortion introduced by the wide-angle lens;
4) a Kalman prediction algorithm, used to predict the position of the moving workpiece and obtain the workpiece position without time lag;
5) a workpiece type recognition algorithm: when a workpiece passes on the conveyor belt, the camera captures the workpiece image and the workpiece type is identified to facilitate classification;
6) a motion control method for the mechanical arm, comprising a trajectory planning method and a trajectory control method, so that the mechanical arm can select an optimal path and sort workpieces rapidly.
2. The image processing algorithm for the video stream according to claim 1, characterized by a background subtraction method for detecting the moving workpiece position and a method for detecting the workpiece attitude, comprising the following steps:
1) performing background modeling on the captured video stream:
acc(x,y)=α*image(x,y)+(1-α)*acc(x,y)
where image(x,y) is the current frame, acc(x,y) is the background model (initialized to 0), and α is the background update rate with range 0-1; the system chooses α=0.15;
2) separating the foreground frame image and detecting the workpiece position:
frimage(x,y)=image(x,y)-acc(x,y)
where frimage(x,y) is the foreground frame;
3) converting the color picture to grayscale;
4) measuring the maximum extents of the workpiece in the vertical and horizontal directions to judge whether the workpiece attitude tends toward vertical or horizontal; if it tends toward vertical, edge detection is performed from left to right; if it tends toward horizontal, edge detection is performed from top to bottom; the points whose gradient in the selected direction exceeds a threshold are detected, a straight line is fitted to these points, and the angle is calculated.
3. The radial distortion correction algorithm for the camera according to claim 1, characterized in that the radial distortion matrix of the camera is fitted by the linear regression (LR) method, comprising the following steps:
1) establishing the radial distortion correction model:
u' = a00 + a10*u + a01*v + a11*u*v + a20*u^2 + a02*v^2; v' = b00 + b10*u + b01*v + b11*u*v + b20*u^2 + b02*v^2
where (u, v) are the image coordinates of a reference point, (u', v') are the image coordinates of the reference point after distortion correction, and ast, bst are the correction coefficients, s, t = 0, 1, 2;
2) obtaining the image coordinates of 12 reference points before and after correction, and solving for the radial distortion matrix.
4. The Kalman prediction algorithm according to claim 1, characterized in that the speed of the conveyor belt need not be detected by an encoder, the prediction comprising the following steps:
1) estimating the workpiece motion speed;
2) computing the optimal state estimate from the observation signal and the model-estimated signal.
5. The workpiece type recognition method according to claim 1, characterized in that the workpiece classifier based on the LabVIEW Vision Development Module (VDM) can classify a relatively wide variety of workpieces and that the classifier is trained directly on the captured 3D workpiece samples, comprising the following steps:
1) inputting samples of the various workpieces into the pattern classify program of LabVIEW, performing feature detection on each sample, training the shape model, and building a workpiece sample database;
2) detecting the features of the target in the region of interest (ROI) of the real-time image, matching them against the models in the database, computing the matching scores, and selecting the result with the highest matching score as the output of the classifier.
6. The automatic workpiece sorting method according to claim 1, characterized by trajectory planning and trajectory tracking of the EPSON SCARA-G6 mechanical arm so that it grasps workpieces continuously, comprising the following steps:
1) planning a trajectory for the mechanical arm according to the workpiece position information predicted by the host computer;
2) controlling the mechanical arm to move along the desired trajectory;
3) adjusting the gripper attitude and grasping the moving workpiece on the conveyor belt; finally, the different types of workpieces are placed as required, and the manipulator then moves to the waiting point for the next round of workpiece sorting.
CN201310129367.9A 2013-04-15 2013-04-15 Moving workpiece sorting method based on LabVIEW Active CN103325106B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310129367.9A CN103325106B (en) 2013-04-15 2013-04-15 Based on the Moving Workpieces method for sorting of LabVIEW

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310129367.9A CN103325106B (en) 2013-04-15 2013-04-15 Based on the Moving Workpieces method for sorting of LabVIEW

Publications (2)

Publication Number Publication Date
CN103325106A true CN103325106A (en) 2013-09-25
CN103325106B CN103325106B (en) 2015-11-25

Family

ID=49193829

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310129367.9A Active CN103325106B (en) 2013-04-15 2013-04-15 Based on the Moving Workpieces method for sorting of LabVIEW

Country Status (1)

Country Link
CN (1) CN103325106B (en)


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1253309A (en) * 1998-11-10 2000-05-17 富士摄影胶片株式会社 Posture regulating device and classifying device for use on photographic film set with lens
CN1806940A (en) * 2006-01-23 2006-07-26 湖南大学 Defective goods automatic sorting method and equipment for high-speed automated production line
CN102171531A (en) * 2008-10-08 2011-08-31 本田技研工业株式会社 Device for estimating shape of work and method for estimating shape of work
CN101402199A (en) * 2008-10-20 2009-04-08 北京理工大学 Hand-eye type robot movable target extracting method with low servo accuracy based on visual sensation
CN102430530A (en) * 2010-08-31 2012-05-02 株式会社安川电机 Robot system
CN102151661A (en) * 2010-11-24 2011-08-17 季广厚 Method and equipment for sorting test tube samples
CN102207988A (en) * 2011-06-07 2011-10-05 北京邮电大学 Efficient dynamic modeling method for multi-degree of freedom (multi-DOF) mechanical arm
CN102692618A (en) * 2012-05-23 2012-09-26 浙江工业大学 RFID (radio frequency identification) positioning method based on RSSI (received signal strength indicator) weight fusion
CN102914967A (en) * 2012-09-21 2013-02-06 浙江工业大学 Autonomous navigation and man-machine coordination picking operating system of picking robot

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
彭艳芳: "视频运动目标检测与跟踪算法研究" [Research on video moving target detection and tracking algorithms], China Master's Theses Full-text Database, 15 December 2010 (2010-12-15) *
戴剑锋: "摄像头径向畸变自动校正系统" [Automatic correction system for camera radial distortion], China Master's Theses Full-text Database, 15 March 2011 (2011-03-15) *
钞萌: "基于机器人视觉的定位" [Positioning based on robot vision], China Master's Theses Full-text Database, 15 November 2010 (2010-11-15) *

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104148300B (en) * 2014-01-24 2017-02-15 北京聚鑫跃锋科技发展有限公司 Garbage sorting method and system based on machine vision
CN104148300A (en) * 2014-01-24 2014-11-19 北京聚鑫跃锋科技发展有限公司 Garbage sorting method and system based on machine vision
CN107111739A (en) * 2014-08-08 2017-08-29 机器人视觉科技股份有限公司 The detection and tracking of article characteristics
CN104268602A (en) * 2014-10-14 2015-01-07 大连理工大学 Shielded workpiece identifying method and device based on binary system feature matching
CN105159248A (en) * 2015-08-05 2015-12-16 东莞理工学院 Machine vision based method for classifying industrial products
CN105159248B (en) * 2015-08-05 2019-01-29 东莞理工学院 A method of classifying to industrial products based on machine vision
CN105225225B (en) * 2015-08-31 2017-12-22 温州城电智能科技有限公司 A kind of leather system for automatic marker making method and apparatus based on machine vision
CN105225225A (en) * 2015-08-31 2016-01-06 臻雅科技温州有限公司 A kind of leather system for automatic marker making method and apparatus based on machine vision
CN105405139A (en) * 2015-11-12 2016-03-16 深圳市傲视检测技术有限公司 Monocular CCD (Charge Coupled Device) based method and system for rapidly positioning feeding of small-sized glass panel
CN105728328A (en) * 2016-05-13 2016-07-06 杭州亚美利嘉科技有限公司 Goods sorting system and method
CN109863002A (en) * 2016-10-21 2019-06-07 通快机床两合公司 Workpiece collects dot element and the method for auxiliary work-piece processing
CN108458655A (en) * 2017-02-22 2018-08-28 上海理工大学 Support the data configurableization monitoring system and method for vision measurement
CN107671008A (en) * 2017-11-13 2018-02-09 中国科学院合肥物质科学研究院 A kind of part stream waterline automatic sorting boxing apparatus of view-based access control model
CN108160530A (en) * 2017-12-29 2018-06-15 苏州德创测控科技有限公司 A kind of material loading platform and workpiece feeding method
CN108188039A (en) * 2018-01-15 2018-06-22 苏州工业园区服务外包职业学院 A kind of fruit Automated Sorting System and method
CN108406780A (en) * 2018-05-18 2018-08-17 苏州吉成智能科技有限公司 pharmacy fault scanning method
CN108782797A (en) * 2018-06-15 2018-11-13 广东工业大学 The control method and arm-type tea frying machine of arm-type tea frying machine stir-frying tealeaves
CN110936372A (en) * 2018-09-21 2020-03-31 许昌学院 Control system of cigarette carton stacking robot
CN109279325A (en) * 2018-10-16 2019-01-29 深圳市正和忠信股份有限公司 A kind of automatic feeding system
CN109279325B (en) * 2018-10-16 2024-04-26 深圳市正和忠信股份有限公司 Automatic feeding system
CN109927033A (en) * 2019-04-01 2019-06-25 杭州电子科技大学 A kind of target object dynamic adaptation method applied to conveyer belt sorting
CN110180799A (en) * 2019-06-28 2019-08-30 中船黄埔文冲船舶有限公司 A kind of part method for sorting and system based on machine vision
CN110711701A (en) * 2019-09-16 2020-01-21 华中科技大学 A grab-type flexible sorting method based on RFID spatial positioning technology
CN110861076A (en) * 2019-12-11 2020-03-06 深圳市盛世鸿恩科技有限公司 Hand eye calibration device of mechanical arm
CN111346829A (en) * 2020-02-28 2020-06-30 西安电子科技大学 PYNQ-based binocular camera three-dimensional sorting system and method
CN112525157A (en) * 2020-10-13 2021-03-19 江苏三立液压机械有限公司 Hydraulic oil cylinder size measurement and pose estimation method and system based on video image
CN113814986A (en) * 2021-11-23 2021-12-21 广东隆崎机器人有限公司 Method and system for controlling SCARA robot based on machine vision
CN114798505A (en) * 2022-04-21 2022-07-29 无锡比益特科技有限公司 Cargo sorting device capable of achieving self-adaptive adjustment of cargo pose
CN114798505B (en) * 2022-04-21 2024-02-20 无锡比益特科技有限公司 Cargo sorting device capable of realizing cargo pose self-adaptive adjustment
CN114749981A (en) * 2022-05-27 2022-07-15 中迪机器人(盐城)有限公司 Feeding and discharging control system and method based on multi-axis robot
CN114888851A (en) * 2022-05-30 2022-08-12 北京航空航天大学杭州创新研究院 Moving object robot grabbing device based on visual perception
CN114888851B (en) * 2022-05-30 2024-12-27 北京航空航天大学杭州创新研究院 A robot grasping device for moving objects based on visual perception
CN116423528A (en) * 2023-06-13 2023-07-14 国网浙江省电力有限公司宁波供电公司 Transformer oil sample sorting method and system
CN116423528B (en) * 2023-06-13 2023-10-17 国网浙江省电力有限公司宁波供电公司 Transformer oil sample sorting method and system

Also Published As

Publication number Publication date
CN103325106B (en) 2015-11-25

Similar Documents

Publication Publication Date Title
CN103325106B (en) Based on the Moving Workpieces method for sorting of LabVIEW
CN110202583B (en) A humanoid manipulator control system based on deep learning and its control method
CN108399639B (en) Rapid automatic grabbing and placing method based on deep learning
CN107992881B (en) Robot dynamic grabbing method and system
CN111462154B (en) Target positioning method and device based on depth vision sensor and automatic grabbing robot
CN111496770A (en) Intelligent handling robotic arm system and using method based on 3D vision and deep learning
CN104058260B (en) The robot automatic stacking method that view-based access control model processes
CN112518748B (en) Automatic grabbing method and system for visual mechanical arm for moving object
CN109926817B (en) Transformer automatic assembly method based on machine vision
CN113814986B (en) Method and system for controlling SCARA robot based on machine vision
CN111923053A (en) Industrial robot object grabbing teaching system and method based on depth vision
CN112102368B (en) Deep learning-based robot garbage classification and sorting method
CN111251295A (en) Visual mechanical arm grabbing method and device applied to parameterized parts
CN113146172A (en) Multi-vision-based detection and assembly system and method
Husain et al. Realtime tracking and grasping of a moving object from range video
CN115070781B (en) Object grabbing method and two-mechanical-arm cooperation system
CN109732610A (en) Human-machine collaborative robot grasping system and its working method
CN117162094A (en) Multi-target self-adaptive angle grabbing method of visual servo mechanical arm
CN109079777B (en) Manipulator hand-eye coordination operation system
Uçar et al. Determination of angular status and dimensional properties of objects for grasping with robot arm
Gao et al. An automatic assembling system for sealing rings based on machine vision
CN118122642A (en) Leaf spring pressure sorting method and sorting system
WO2025000778A1 (en) Gripping control method and apparatus for test tube
Zhou et al. Visual servo control system of 2-DOF parallel robot
Zhao et al. POSITIONING AND GRABBING TECHNOLOGY OF INDUSTRIAL ROBOT BASED ON VISION.

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant