CN103895042A - Industrial robot workpiece positioning grabbing method and system based on visual guidance - Google Patents


Info

Publication number
CN103895042A
CN103895042A · CN201410073766 · CN201410073766A
Authority
CN
Grant status
Application
Patent type
Prior art keywords
workpiece
robot
image
target
step
Prior art date
Application number
CN 201410073766
Other languages
Chinese (zh)
Inventor
翟敬梅
董鹏飞
张铁
Original Assignee
华南理工大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date

Abstract

The invention discloses an industrial robot workpiece positioning and grabbing method based on visual guidance. The method includes the following steps: 1. a parametric model of the system is established and the camera is calibrated; 2. a workpiece feature template is determined; 3. instances of the workpiece template are searched for, and the positions of the workpieces are determined from the coordinate information of the instances; 4. the speed of the target workpiece is calculated; 5. the pose of the workpiece in the robot base coordinate system is predicted for the moment when the workpiece reaches the grabbing station; 6. the robot moves along a planned path to approach and grab the workpiece and place it at the target point. The invention further discloses a system for implementing the industrial robot workpiece positioning and grabbing method based on visual guidance of claim 1, comprising a conveyor belt, a photoelectric switch, a camera, an industrial control computer, the robot, and the target workpiece. The system offers high positioning accuracy, high working efficiency, and a high degree of automation.

Description

Industrial robot workpiece positioning and grabbing method and system based on visual guidance

Technical Field

[0001] The present invention relates to robot workpiece grabbing technology, and in particular to an industrial robot workpiece positioning and grabbing method and system based on visual guidance.

Background

[0002] Workpiece grabbing is an important technology for industrial robots on production lines. At present, most industrial robots on production lines can only execute predefined instructions in a strictly defined, structured environment; once the state of the workpiece changes, the robot often cannot respond correctly. In recent years, visual guidance and positioning technology has become the primary means by which industrial robots obtain information about their working environment. It gives industrial robots the ability to make autonomous judgments during actual operation, greatly improving their application flexibility and work quality. Existing research on vision-guided industrial robot grabbing systems falls mainly into two categories:

[0003] (1) Industrial robot grabbing methods based on monocular vision: the camera is typically mounted above the industrial robot's workspace so that the target workpiece and the end of the manipulator both appear in the camera's field of view, and the relationship between the target and the robot hand is established through the camera as an intermediary.

[0004] (2) Industrial robot grabbing methods based on stereo vision: two cameras photograph the target workpiece simultaneously, and parallax and stereo matching techniques are used to obtain the spatial pose of the target, thereby guiding the robot to perform the grabbing action.

[0005] The first method requires the camera to observe the target workpiece and the end of the robot hand simultaneously; when the robot moves toward the target and operates on it, it occludes the target, causing the grab to fail. The second method uses two cameras, which raises cost while increasing the amount of computation, and the calibration process is complicated; real-time performance cannot be reliably guaranteed.

Summary of the Invention

[0006] The primary object of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing an industrial robot workpiece positioning and grabbing method based on visual guidance. In this method, the camera and the industrial robot are fixedly mounted at the two ends of a conveyor belt, and the workpiece is located and grabbed by means of a recognition algorithm and a tracking algorithm. The method achieves high workpiece positioning accuracy and good real-time performance.

[0007] Another object of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing a system that implements the industrial robot workpiece positioning and grabbing method based on visual guidance. The system is suitable for workpiece grabbing operations by industrial robots on production lines, improves the working efficiency and work quality of the robot, and is suitable for widespread use.

[0008] The primary object of the present invention is achieved through the following technical solution: an industrial robot workpiece positioning and grabbing method based on visual guidance, comprising the following steps:

[0009] Step S1: establish a parametric model of the system according to the positional relationship between the camera and the robot, calibrate the camera's intrinsic and extrinsic parameters, and establish the relative pose relationship between the target workpiece and the robot with the camera as an intermediary;

[0010] Step S2: when a workpiece passes, the photoelectric switch triggers the camera to capture images of the workpiece carried into the camera's field of view by the conveyor belt; the images are transmitted to the industrial control computer for image-filtering and image-enhancement preprocessing to reduce the influence of noise, and the feature template of the target workpiece is determined as the basis for the recognition algorithm;

[0011] Step S3: use a template matching recognition algorithm to search the workpiece image for instances of the workpiece feature template, obtaining the position (x, y, z) and deflection angle θ of the workpiece at the moment of capture, and map (x, y, z) and θ into the robot base coordinate system according to the calibration result of step S1;

[0012] Step S4: after the workpiece images captured in step S2 have undergone the grayscale template matching of step S3, compute from the 5-10 matched frames the displacement S of the target workpiece's center along the direction of motion, and divide it by the time T taken to capture these 5-10 frames, yielding the target workpiece speed V:

[0013] V = S/T,

[0014] where S denotes displacement and T denotes time; [0015] Step S5: the tracking algorithm uses a Kalman filter model to predict the position and deflection angle of the workpiece when it reaches the grabbing station; the Kalman filter model can accurately estimate the position of the target workpiece on the conveyor belt at a given moment;
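The speed estimate V = S/T of steps S2-S4 can be sketched as follows. The frame positions, timestamps, and units below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def estimate_speed(centers, timestamps):
    """Estimate conveyor speed from matched workpiece centers.

    centers: (x, y) workpiece-center positions from 5-10 consecutive
             matched frames, in the robot base coordinate system.
    timestamps: capture time of each frame, in seconds.
    Returns V = S / T, with S the displacement between the first and
    last frame and T the elapsed capture time.
    """
    p0 = np.asarray(centers[0], dtype=float)
    p1 = np.asarray(centers[-1], dtype=float)
    S = np.linalg.norm(p1 - p0)          # displacement S along the motion
    T = timestamps[-1] - timestamps[0]   # total capture time T
    return S / T

# Example: a workpiece advancing 20 mm per frame at 0.1 s intervals.
centers = [(0.0, 50.0), (20.0, 50.0), (40.0, 50.0), (60.0, 50.0), (80.0, 50.0)]
times = [0.0, 0.1, 0.2, 0.3, 0.4]
v = estimate_speed(centers, times)  # 200.0 mm/s
```

Using the first and last frame averages out per-frame localization noise compared with differencing adjacent frames.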

[0016] Step S6: plan the robot's trajectory according to the positions of the grabbing station and the target point; the robot controller issues commands to make the robot follow the planned trajectory, approach and grab the workpiece, and place it at the target point.

[0017] Step S1 comprises the following steps:

[0018] Step 11: calibrate the camera using the planar-target calibration method, and use the calibration-plate image to establish a reference coordinate system (o_ref x_ref y_ref z_ref) on the conveyor belt, obtaining the relative pose camHref between the reference coordinate system and the camera coordinate system;

[0019] Step 12: by offline measurement, obtain the relative pose baseHref between the reference coordinate system and the robot base coordinate system (O_w X_w Y_w Z_w). With the reference coordinate system (o_ref x_ref y_ref z_ref) as an intermediary, the pose relationship between the camera coordinate system and the robot base coordinate system is baseHcam = baseHref · (camHref)^(-1). Target localization yields

[0020] camHobj, and the relative pose relationship baseHobj between the target workpiece and the robot is then:

[0021] baseHobj = baseHcam · camHobj

[0022] where baseHcam denotes the pose relationship between the camera coordinate system and the robot base coordinate system.
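The transform chain above can be sketched with 4×4 homogeneous matrices. The numeric poses below are made-up illustrative values, not calibration results from the patent:

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = t
    return H

# Hypothetical calibration results (identity rotations, mm translations):
# base_H_ref: reference frame (on the belt) expressed in the robot base frame
# cam_H_ref : reference frame expressed in the camera frame
base_H_ref = make_pose(np.eye(3), np.array([500.0, 0.0, 0.0]))
cam_H_ref = make_pose(np.eye(3), np.array([0.0, 0.0, 800.0]))

# baseHcam = baseHref · (camHref)^-1
base_H_cam = base_H_ref @ np.linalg.inv(cam_H_ref)

# With a target located by vision in the camera frame (camHobj),
# the workpiece pose in the robot base frame is baseHobj = baseHcam · camHobj.
cam_H_obj = make_pose(np.eye(3), np.array([10.0, 20.0, 800.0]))
base_H_obj = base_H_cam @ cam_H_obj
```

The reference frame on the belt acts purely as an intermediary: it cancels out of the product, leaving a direct camera-to-base mapping.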

[0023] Step S2 comprises the following steps:

[0024] Step 21: when a workpiece passes, the photoelectric switch triggers the camera to capture 5-10 frames of the workpiece as the conveyor belt carries it through the camera's field of view, and the capture time of each frame is recorded, with the capture time of the first frame taken as the timing origin t0;

[0025] Step 22: transmit the images to the industrial control computer, which applies a smoothing filter to each image. The smoothing operation uses the mean filtering method, whose expression is:

[0026] g(x, y) = (1/M) Σ_{(s,t) ∈ N(x,y)} f(s, t)

where N(x, y) is the 3×3 neighborhood centered at (x, y) and M is the number of pixels in that neighborhood.

[0027] The mean filtering method adds the gray value of each pixel in the original image to the gray values of its 8 neighboring pixels and takes the average. After mean filtering, image noise is smoothed. The workpiece grayscale feature template is extracted from the preprocessed image and serves as the basis for the template matching recognition algorithm in step S3.
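The 3×3 mean filter described above can be sketched as follows; leaving the border pixels unchanged is a simplification not specified in the patent:

```python
import numpy as np

def mean_filter_3x3(img):
    """3x3 mean filter: each interior output pixel is the average of the
    pixel and its 8 neighbours (M = 9); border pixels are copied as-is."""
    img = img.astype(float)
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = img[y - 1:y + 2, x - 1:x + 2].mean()
    return out

# A single bright noise pixel in a flat region is spread over its
# neighbourhood and reduced to the neighbourhood mean.
img = np.zeros((5, 5))
img[2, 2] = 9.0
smoothed = mean_filter_3x3(img)  # smoothed[2, 2] == 1.0
```

In practice the same operation is a convolution with a uniform 3×3 kernel, which vectorized image libraries compute far faster than this explicit double loop.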

[0028] Step S3 comprises the following steps:

[0029] Step 31: using a grayscale-feature-based template matching recognition algorithm, slide the workpiece feature template over the workpiece image starting from its upper-left corner to search for instances of the template. The matching result at point (i, j) of the workpiece image can be expressed by the normalized correlation coefficient NCC(i, j), whose mathematical expression is:

[0030] NCC(i, j) = Σ_{m,n} S(i+m, j+n)·T(m, n) / sqrt( Σ_{m,n} S(i+m, j+n)² · Σ_{m,n} T(m, n)² )

[0031] When the workpiece feature template is identical to the sub-image being matched, NCC(i, j) = 1. After the whole image has been searched, the sub-image at the maximum of NCC(i, j) is the workpiece instance, and its position (X, Y, Z) in the image and deflection angle θ are obtained;
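The NCC search of step 31 can be sketched as follows. The exhaustive scan and the random test image are illustrative assumptions, and recovery of the deflection angle θ (which would require a rotated template set or a rotation-aware matcher) is omitted:

```python
import numpy as np

def ncc(search, template, i, j):
    """Normalized correlation of the template with the sub-image of
    `search` whose top-left corner is at (i, j) (row, col)."""
    h, w = template.shape
    S = search[i:i + h, j:j + w].astype(float)
    T = template.astype(float)
    num = np.sum(S * T)
    den = np.sqrt(np.sum(S * S) * np.sum(T * T))
    return num / den

def match(search, template):
    """Exhaustive search from the upper-left corner: return the (i, j)
    maximizing NCC, as in step 31."""
    h, w = template.shape
    H, W = search.shape
    scores = np.array([[ncc(search, template, i, j)
                        for j in range(W - w + 1)]
                       for i in range(H - h + 1)])
    return np.unravel_index(np.argmax(scores), scores.shape)

# A template cut from position (2, 3) of the image is recovered there,
# where NCC reaches its maximum of 1.
rng = np.random.default_rng(0)
search = rng.uniform(0.1, 1.0, size=(10, 10))
template = search[2:5, 3:6].copy()
best = match(search, template)
```

Production systems usually replace the double Python loop with an FFT-based or library cross-correlation for real-time performance.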

[0032] Step 32: according to the calibration result of step S1, map (X, Y, Z) and θ into the robot base coordinate system, obtaining the workpiece pose expressed in the robot base coordinate system as (Xb, Yb, Zb) and θb.

[0033] Step S5 comprises the following steps:

[0034] Step 51: the Kalman filter model establishes a uniform-linear-motion model for the workpiece on the conveyor belt, predicting and correcting the target's state through the time-update and state-update equations. The motion model of the target workpiece can be expressed mathematically as:

[0035] x_k = x_{k-1} + v_x·dt
       y_k = y_{k-1} + v_y·dt

[0036] In Kalman filter matrix form this becomes:

[0037] X_k = A·X_{k-1} + W, with state vector X_k = [x_k, y_k, v_x, v_y]^T and

       A = | 1 0 dt 0  |
           | 0 1 0  dt |
           | 0 0 1  0  |
           | 0 0 0  1  |

[0038] Only the position of the workpiece can be observed in the image, so the observation model Z_k is:

[0039] Z_k = H·X_k + V, with H = | 1 0 0 0 |
                                 | 0 1 0 0 |

[0040] Step 52: use the positioning information (x0, y0, z0) and θ0 of the workpiece in the robot base coordinate system, detected by the recognition algorithm from the first frame, together with the workpiece speed V computed in step S4, to initialize the Kalman filter model, and predict the position (x1, y1, z1) and deflection angle θ1 of the workpiece in the robot base coordinate system when it reaches the grabbing station after time Δt.
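The prediction of step 52 under the constant-velocity model can be sketched as follows. Only the state prediction X_k = A·X_{k-1} is shown; the covariance propagation and the measurement correction of a full Kalman filter are omitted, and the numeric state is an illustrative assumption:

```python
import numpy as np

def predict_position(x0, y0, vx, vy, dt):
    """One time-update of the constant-velocity model: propagate the
    state X = [x, y, vx, vy]^T by X_k = A @ X_{k-1} and return the
    predicted (x, y) after dt seconds."""
    A = np.array([[1.0, 0.0, dt, 0.0],
                  [0.0, 1.0, 0.0, dt],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    X = np.array([x0, y0, vx, vy], dtype=float)
    Xp = A @ X
    return Xp[0], Xp[1]

# Workpiece detected at (100, 50) mm moving at 200 mm/s along x,
# predicted 0.5 s ahead at the grabbing station.
x1, y1 = predict_position(100.0, 50.0, 200.0, 0.0, 0.5)  # (200.0, 50.0)
```

Because the workpiece moves in the belt plane, z stays constant and only (x, y) need to be propagated, which is why the state vector carries just the planar position and velocity.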

[0041] Another object of the present invention is achieved through the following technical solution: a system for implementing the industrial robot workpiece positioning and grabbing method based on visual guidance of claim 1, comprising a conveyor belt, a photoelectric switch, a camera, an industrial control computer, a robot, and a target workpiece. The camera is mounted at one end of the conveyor belt, the robot at the other end, and the photoelectric switch on the conveyor belt; the photoelectric switch, camera, industrial control computer, and robot are electrically connected in sequence.

[0042] Principle of the invention: the robot workpiece positioning and grabbing method of the present invention comprises a workpiece recognition algorithm and a workpiece tracking algorithm, which together accomplish the task of detecting, tracking, and grabbing workpieces moving on the conveyor belt.

[0043] Compared with the prior art, the present invention has the following advantages and effects:

[0044] 1. The recognition algorithm of the present invention can accurately compute parameters such as the workpiece center and deflection angle, giving high positioning accuracy.

[0045] 2. The tracking algorithm of the present invention effectively solves the problem of tracking and locating the workpiece, meets the real-time requirements of industrial production lines, improves the working efficiency and work quality of the robot, and is suitable for widespread use.

[0046] 3. The template matching recognition algorithm of the present invention is based on grayscale features; it is simple in principle, easy to implement, and offers good accuracy and real-time performance.

Brief Description of the Drawings

[0047] Figure 1 is a front view of the structure of the present invention.

[0048] Figure 2 is a schematic flowchart of the vision-guided robot grabbing a moving workpiece according to the present invention.

[0049] Figure 3 is a schematic diagram of the coordinate system transformations of the present invention.

[0050] Figure 4 is a flowchart of the Kalman filter algorithm used in the present invention.

Detailed Description

[0051] The present invention is described in further detail below with reference to the embodiment and the drawings, but embodiments of the present invention are not limited thereto.

[0052] Embodiment

[0053] As shown in Figure 1, an industrial robot workpiece positioning and grabbing system based on visual guidance comprises a conveyor belt 1, a photoelectric switch 2, a camera 3, an industrial control computer 4, a robot 5, and a target workpiece 6. The camera 3 is mounted at one end of the conveyor belt 1, the robot 5 at the other end, and the photoelectric switch 2 on the conveyor belt 1; the photoelectric switch 2, camera 3, industrial control computer 4, and robot 5 are electrically connected in sequence.

[0054] As shown in Figure 2, an industrial robot workpiece positioning and grabbing method based on visual guidance comprises the following steps:

[0055] Step S1: as shown in Figure 3, calibrate the camera 3 using the planar-target calibration method, and use one of the calibration-plate images to establish a reference coordinate system (o_ref x_ref y_ref z_ref) on the conveyor belt 1, obtaining the relative pose camHref between the reference coordinate system and the camera coordinate system. At the same time, obtain the relative pose baseHref between the reference coordinate system and the base coordinate system (O_w X_w Y_w Z_w) of the robot 5 by offline measurement. With the reference coordinate system as an intermediary, the pose relationship between the camera coordinate system and the robot base coordinate system is:

[0056] baseHcam = baseHref · (camHref)^(-1),

[0057] Target localization yields camHobj, and the relative pose relationship between the target workpiece 6 and the robot 5 is:

[0058] baseHobj = baseHcam · camHobj,

[0059] thereby establishing the link between the target workpiece and the robot;

[0060] Step S2: when a workpiece passes, the photoelectric switch 2 triggers the camera 3 to capture 5-10 frames of the workpiece carried into the camera's field of view by the conveyor belt, recording the capture time of each frame, and the images are transmitted to the industrial control computer 4 for smoothing. The system noise is mainly random noise caused by reflections from the conveyor belt, the CCD circuitry, and the industrial environment. Image filtering uses the mean filtering method to reduce the influence of noise; its mathematical formula is:

[0061] g(x, y) = (1/M) Σ_{(s,t) ∈ N(x,y)} f(s, t)

[0062] The mean filtering method adds the gray value of each pixel in the original image to the gray values of its 8 neighboring pixels and takes the average. After mean filtering, image noise is smoothed. The workpiece grayscale feature template is extracted from the preprocessed image and serves as the basis for the recognition algorithm in step S3;

[0063] Step S3: using the grayscale-feature-based template matching recognition algorithm, slide the workpiece feature template over the workpiece image starting from its upper-left corner to search for instances of the template. The matching result at point (i, j) of the workpiece image can be expressed by the normalized correlation coefficient NCC(i, j), whose mathematical expression is:

[0064] NCC(i, j) = Σ_{m,n} S(i+m, j+n)·T(m, n) / sqrt( Σ_{m,n} S(i+m, j+n)² · Σ_{m,n} T(m, n)² )

[0065] When the workpiece feature template is identical to the sub-image being matched, NCC(i, j) = 1. After the whole image has been searched, the sub-image at the maximum of NCC(i, j) is the workpiece instance, and its position (X, Y, Z) in the image and deflection angle θ are obtained. According to the calibration result of step S1, (X, Y, Z) and θ are mapped into the robot base coordinate system, giving the workpiece pose expressed in the robot base coordinate system as (xb, yb, zb) and θb.

[0066] Step S4: after the 5-10 workpiece images captured in step S2 have all undergone the grayscale template matching of step S3, compute the displacement S of the target workpiece's center along the direction of motion and divide it by the time T taken to capture these 5-10 frames, obtaining the target workpiece speed V = S/T.

[0067] Step S5: the tracking algorithm uses a Kalman filter model to establish a uniform-linear-motion model for the workpiece on the conveyor belt. As shown in Figure 4, the Kalman filter model predicts and corrects the target's state through the time-update and state-update equations. The motion model of the target workpiece 6 can be expressed mathematically as:

[0068] x_k = x_{k-1} + v_x·dt
       y_k = y_{k-1} + v_y·dt

[0069] In Kalman filter matrix form this becomes:

[0070] X_k = A·X_{k-1} + W, with state vector X_k = [x_k, y_k, v_x, v_y]^T and

       A = | 1 0 dt 0  |
           | 0 1 0  dt |
           | 0 0 1  0  |
           | 0 0 0  1  |

[0071] Only the position of the workpiece can be observed in the image, so the observation model Z_k is:

[0072] Z_k = H·X_k + V, with H = | 1 0 0 0 |
                                 | 0 1 0 0 |

[0073] Use the positioning information (X0, Y0, Z0) and deflection angle θ0 of the workpiece in the robot base coordinate system, detected by the recognition algorithm from the first frame, together with the workpiece speed V computed in step S4, to initialize the Kalman filter model, and predict the position (X1, Y1, Z1) and deflection angle θ1 of the workpiece in the robot base coordinate system when it reaches the grabbing station after time Δt.

[0074] Step S6: plan the robot's trajectory according to the positions of the grabbing station (Xe, Ye, Ze) and the target point (Xtar, Ytar, Ztar); the robot controller issues commands to make the robot move along the planned trajectory, approach the workpiece, adjust the pose of its end effector, and accurately grab the workpiece and place it at the target point.

[0075] The above embodiment is a preferred embodiment of the present invention, but embodiments of the present invention are not limited to it; any other change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention shall be an equivalent replacement and is included within the scope of protection of the present invention.

Claims (6)

  1. An industrial robot workpiece positioning and grabbing method based on visual guidance, characterized by comprising the following steps:
Step S1: establish a parametric model of the system according to the positional relationship between the camera and the robot, calibrate the camera's intrinsic and extrinsic parameters, and establish the relative pose relationship between the target workpiece and the robot with the camera as an intermediary;
Step S2: when a workpiece passes, the photoelectric switch triggers the camera to capture images of the workpiece carried into the camera's field of view by the conveyor belt; the images are transmitted to the industrial control computer for image-filtering and image-enhancement preprocessing to reduce the influence of noise, and the feature template of the target workpiece is determined as the basis for the recognition algorithm;
Step S3: use a template matching recognition algorithm to search the workpiece image for instances of the workpiece feature template, obtaining the position (X, Y, Z) and deflection angle θ of the workpiece at the moment of capture, and map (X, Y, Z) and θ into the robot base coordinate system according to the calibration result of step S1;
Step S4: after the workpiece images captured in step S2 have undergone the grayscale template matching of step S3, compute from the 5-10 matched frames the displacement S of the target workpiece's center along the direction of motion, and divide it by the time T taken to capture these 5-10 frames, yielding the target workpiece speed V: V = S/T, where S denotes displacement and T denotes time;
Step S5: the tracking algorithm uses a Kalman filter model to predict the position and deflection angle of the workpiece when it reaches the grabbing station;
Step S6: plan the robot's trajectory according to the positions of the grabbing station and the target point; the robot controller issues commands to make the robot follow the planned trajectory, approach and grab the workpiece, and place it at the target point.
  2. The industrial robot workpiece positioning and grabbing method based on visual guidance according to claim 1, characterized in that step S1 comprises the following steps:
Step 11: calibrate the camera using the planar-target calibration method, and use the calibration-plate image to establish a reference coordinate system (o_ref x_ref y_ref z_ref) on the conveyor belt, obtaining the relative pose camHref between the reference coordinate system and the camera coordinate system;
Step 12: by offline measurement, obtain the relative pose baseHref between the reference coordinate system and the robot base coordinate system (O_w X_w Y_w Z_w); with the reference coordinate system as an intermediary, the pose relationship between the camera coordinate system and the robot base coordinate system is obtained: baseHcam = baseHref · (camHref)^(-1); target localization yields camHobj, and the relative pose relationship baseHobj between the target workpiece and the robot is expressed as: baseHobj = baseHcam · camHobj, where baseHcam denotes the pose relationship between the camera coordinate system and the robot base coordinate system, and camHobj denotes the pose relationship between the camera coordinate system and the workpiece coordinate system.
  3. The industrial robot workpiece positioning and grabbing method based on visual guidance according to claim 1, characterized in that step S2 comprises the following steps:
Step 21: when a workpiece passes, the photoelectric switch triggers the camera to capture 5-10 frames of the workpiece carried into the camera's field of view by the conveyor belt, and the capture time of each frame is recorded;
Step 22: transmit the images to the industrial control computer, which applies a smoothing filter to each image; the smoothing operation uses the mean filtering method, whose expression is:
g(x, y) = (1/M) Σ_{(s,t) ∈ N(x,y)} f(s, t)
where f(x, y) denotes the gray value at pixel (x, y), g(x, y) denotes the pixel value at (x, y) after mean filtering, and M denotes the total number of pixels in the template including the current pixel; the mean filtering method adds the gray value of each pixel in the original image to the gray values of its 8 neighboring pixels and takes the average; after mean filtering, image noise is smoothed, and the workpiece grayscale feature template is extracted from the preprocessed image as the basis for the template matching recognition algorithm in step S3.
  4. The industrial robot workpiece positioning and grabbing method based on visual guidance according to claim 1, characterized in that step S3 comprises the following steps:
Step 31: using a grayscale-feature-based template matching recognition algorithm, slide the workpiece feature template over the workpiece image starting from its upper-left corner to search for instances of the template; the matching result at point (i, j) of the workpiece image can be expressed by the normalized correlation coefficient NCC(i, j), whose mathematical expression is:
NCC(i, j) = Σ_{m,n} S(i+m, j+n)·T(m, n) / sqrt( Σ_{m,n} S(i+m, j+n)² · Σ_{m,n} T(m, n)² )
where T(m, n) is the pixel value of the template at coordinates (m, n), S(i+m, j+n) is the pixel value of the searched image at coordinates (i+m, j+n), and (i, j) denotes the pixel position; when the workpiece feature template is identical to the sub-image being matched, NCC(i, j) = 1; after the whole image has been searched, the sub-image at the maximum of NCC(i, j) is the workpiece instance, and its position (X, Y, Z) in the image and deflection angle θ are obtained;
Step 32: according to the calibration result of step S1, map (X, Y, Z) and θ into the robot base coordinate system, obtaining the workpiece pose in the robot base coordinate system, expressed as (Xb, Yb, Zb) and θb.
  5. The vision-guided workpiece positioning and grasping method for an industrial robot according to claim 1, wherein step S5 comprises the following steps: Step 51: the Kalman filter model describes the workpiece on the conveyor belt with a uniform linear motion model; the Kalman filter predicts and corrects the state of the target through its time-update and state-update equations; the motion model of the target workpiece is expressed mathematically as:
    $$\begin{cases} x_k = x_{k-1} + v_x\,dt \\ y_k = y_{k-1} + v_y\,dt \end{cases}$$
    wherein (x_k, y_k) are the position coordinates of the target workpiece at time k, (x_{k-1}, y_{k-1}) are the position coordinates of the target workpiece at time k−1, v_x and v_y are the velocities of the target workpiece in the x and y directions, and dt is the sampling interval; written in Kalman filter matrix form this becomes:
    $$\begin{bmatrix} x_k \\ y_k \\ v_x \\ v_y \end{bmatrix}=\begin{bmatrix} 1 & 0 & dt & 0 \\ 0 & 1 & 0 & dt \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_{k-1} \\ y_{k-1} \\ v_x \\ v_y \end{bmatrix}+W$$
    wherein (x_k, y_k) are the position coordinates of the target workpiece at time k, (x_{k-1}, y_{k-1}) are the position coordinates of the target workpiece at time k−1, v_x and v_y are the velocities of the target workpiece in the x and y directions, dt is the sampling interval, and W is the process excitation noise; since only the position of the workpiece can be observed in the image, the observation model Z_k is:
    $$Z_k=\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} x_k \\ y_k \\ v_x \\ v_y \end{bmatrix}+V$$
    wherein Z_k is the observation model, (x_k, y_k) are the position coordinates of the target workpiece at time k, (x_{k-1}, y_{k-1}) are the position coordinates of the target workpiece at time k−1, v_x and v_y are the velocities of the target workpiece in the x and y directions, dt is the sampling interval, and V is the measurement noise; Step 52: the localization information (X0, Y0, Z0) and Θ0 of the workpiece in the robot base coordinate system, detected from the first workpiece image frame by the recognition algorithm, together with the workpiece velocity v computed in step S4, are used to initialize the Kalman filter model, which then predicts the position (X1, Y1) and deflection angle Θ1 of the workpiece in the robot base coordinate system when it reaches the grasping station after time dt; since the workpiece moves in a plane, its z coordinate is constant.
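The constant-velocity prediction of steps 51 and 52 can be sketched as follows (illustrative only; a full Kalman filter would also propagate an error covariance and apply the gain-weighted correction step, which are omitted here, and the function names are assumptions):

```python
def predict(state, dt):
    """Time-update of the constant-velocity model, i.e. applying the
    state-transition matrix A = [[1,0,dt,0],[0,1,0,dt],[0,0,1,0],[0,0,0,1]]
    to the state vector [x, y, vx, vy] (process noise W omitted)."""
    x, y, vx, vy = state
    return [x + vx * dt, y + vy * dt, vx, vy]

def observe(state):
    """Observation model H = [[1,0,0,0],[0,1,0,0]]: only the workpiece
    position can be measured in the image (measurement noise V omitted)."""
    return state[:2]
```

For example, a workpiece detected at (X0, Y0) moving along the belt at v_x is predicted at X0 + v_x·dt after one sampling interval, which is the pose handed to the robot for the grasping station.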
  6. A system for implementing the vision-guided workpiece positioning and grasping method for an industrial robot according to claim 1, comprising: a conveyor belt, a photoelectric switch, a camera, an industrial control computer, a robot, and a target workpiece; the camera is mounted on a camera bracket at one end of the conveyor belt, the robot is installed at the other end of the conveyor belt, the photoelectric switch is mounted below the camera bracket of the conveyor belt, and the photoelectric switch, the camera, the industrial control computer, and the robot are electrically connected in sequence.
CN 201410073766 2014-02-28 2014-02-28 Industrial robot workpiece positioning grabbing method and system based on visual guidance CN103895042A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201410073766 CN103895042A (en) 2014-02-28 2014-02-28 Industrial robot workpiece positioning grabbing method and system based on visual guidance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201410073766 CN103895042A (en) 2014-02-28 2014-02-28 Industrial robot workpiece positioning grabbing method and system based on visual guidance

Publications (1)

Publication Number Publication Date
CN103895042A true true CN103895042A (en) 2014-07-02

Family

ID=50986772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201410073766 CN103895042A (en) 2014-02-28 2014-02-28 Industrial robot workpiece positioning grabbing method and system based on visual guidance

Country Status (1)

Country Link
CN (1) CN103895042A (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104385282A (en) * 2014-08-29 2015-03-04 暨南大学 Visual intelligent numerical control system and visual measuring method thereof
CN104400788A (en) * 2014-12-03 2015-03-11 安徽省库仑动力自动化科技有限公司 Visual identity robot system for dismantling waste lead battery
CN104626169A (en) * 2014-12-24 2015-05-20 四川长虹电器股份有限公司 Robot part grabbing method based on vision and mechanical comprehensive positioning
CN104647388A (en) * 2014-12-30 2015-05-27 东莞市三瑞自动化科技有限公司 Machine vision-based intelligent control method and machine vision-based intelligent control system for industrial robot
CN104760812A (en) * 2015-02-26 2015-07-08 三峡大学 Monocular vision based real-time location system and method for products on conveying belt
CN104796617A (en) * 2015-04-30 2015-07-22 北京星河康帝思科技开发股份有限公司 Visual inspection method and system for assembly lines
CN105451461A (en) * 2015-11-25 2016-03-30 四川长虹电器股份有限公司 PCB board positioning method based on SCARA robot
CN106041296A (en) * 2016-07-14 2016-10-26 武汉大音科技有限责任公司 Online dynamic vision laser precise processing method
CN106331620A (en) * 2016-08-25 2017-01-11 中国大冢制药有限公司 Medicine bottle positioning analysis method of filling production line
CN106346486A (en) * 2016-11-04 2017-01-25 武汉海默自控股份有限公司 Six-axis cooperated robot multi-loop control system and control method thereof
CN106393103A (en) * 2016-08-23 2017-02-15 苏州博众精工科技有限公司 Self-adaptive material taking method based on machine vision and used for array type material box
CN106556341A (en) * 2016-10-08 2017-04-05 浙江国自机器人技术有限公司 Shelf posture deviation detecting method and system based on characteristic information graph
CN106625676A (en) * 2016-12-30 2017-05-10 易思维(天津)科技有限公司 Three-dimensional vision accurate guiding and positioning method for automobile intelligent manufacturing automatic feeding
CN106671083A (en) * 2016-12-03 2017-05-17 安徽松科智能装备有限公司 Assembly robot system based on machine vision detection
CN106903720A (en) * 2017-04-28 2017-06-30 安徽捷迅光电技术有限公司 Automatic visual correction method for coordinates of conveying platform and robot
CN106956292A (en) * 2017-04-28 2017-07-18 安徽捷迅光电技术有限公司 Coordinate visual physical correction method based on conveying platform and robot
CN107044837A (en) * 2016-12-26 2017-08-15 北京京东尚科信息技术有限公司 Method for calibrating coordinate system of detection tool, device and control apparatus
CN107300100A (en) * 2017-05-22 2017-10-27 浙江大学 Visual guidance and approach method of cascade-connection mechanical arm driven by on-line CAD model

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120323358A1 (en) * 2011-06-20 2012-12-20 Kabushiki Kaisha Yaskawa Denki Picking system
CN103112008A (en) * 2013-01-29 2013-05-22 上海智周自动化工程有限公司 Method of automatic positioning and carrying of dual-vision robot used for floor cutting

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120323358A1 (en) * 2011-06-20 2012-12-20 Kabushiki Kaisha Yaskawa Denki Picking system
CN103112008A (en) * 2013-01-29 2013-05-22 上海智周自动化工程有限公司 Method of automatic positioning and carrying of dual-vision robot used for floor cutting

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
FENG Xingsong: "Research on Moving Object Detection and Tracking Based on Omnidirectional Vision", China Master's Theses Full-text Database, Information Science and Technology Series *
LIU Zhenyu: "Research on Machine-Vision-Based Sorting Technology for Industrial Robots", Manufacturing Automation *
ZHAO Bin: "Research on Machine-Vision-Based Sorting Technology for Industrial Robots", China Master's Theses Full-text Database, Information Science and Technology Series *
ZOU Guanghua: "Fast Template Matching Algorithm Based on Geometric Features", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104385282A (en) * 2014-08-29 2015-03-04 暨南大学 Visual intelligent numerical control system and visual measuring method thereof
CN104400788A (en) * 2014-12-03 2015-03-11 安徽省库仑动力自动化科技有限公司 Visual identity robot system for dismantling waste lead battery
CN104626169A (en) * 2014-12-24 2015-05-20 四川长虹电器股份有限公司 Robot part grabbing method based on vision and mechanical comprehensive positioning
CN104647388A (en) * 2014-12-30 2015-05-27 东莞市三瑞自动化科技有限公司 Machine vision-based intelligent control method and machine vision-based intelligent control system for industrial robot
CN104760812A (en) * 2015-02-26 2015-07-08 三峡大学 Monocular vision based real-time location system and method for products on conveying belt
CN104796617A (en) * 2015-04-30 2015-07-22 北京星河康帝思科技开发股份有限公司 Visual inspection method and system for assembly lines
CN104796617B (en) * 2015-04-30 2018-08-28 北京星河泰视特科技有限公司 Visual inspection method and system for assembly lines
CN105451461A (en) * 2015-11-25 2016-03-30 四川长虹电器股份有限公司 PCB board positioning method based on SCARA robot
CN106041296A (en) * 2016-07-14 2016-10-26 武汉大音科技有限责任公司 Online dynamic vision laser precise processing method
CN106041296B (en) * 2016-07-14 2017-07-28 武汉大音科技有限责任公司 Online dynamic vision laser precise processing method
CN106393103A (en) * 2016-08-23 2017-02-15 苏州博众精工科技有限公司 Self-adaptive material taking method based on machine vision and used for array type material box
CN106331620A (en) * 2016-08-25 2017-01-11 中国大冢制药有限公司 Medicine bottle positioning analysis method of filling production line
CN106556341A (en) * 2016-10-08 2017-04-05 浙江国自机器人技术有限公司 Shelf posture deviation detecting method and system based on characteristic information graph
CN106346486B (en) * 2016-11-04 2018-07-27 武汉海默机器人有限公司 Six-axis cooperated robot multi-loop control system and control method thereof
CN106346486A (en) * 2016-11-04 2017-01-25 武汉海默自控股份有限公司 Six-axis cooperated robot multi-loop control system and control method thereof
CN106671083A (en) * 2016-12-03 2017-05-17 安徽松科智能装备有限公司 Assembly robot system based on machine vision detection
CN107044837A (en) * 2016-12-26 2017-08-15 北京京东尚科信息技术有限公司 Method for calibrating coordinate system of detection tool, device and control apparatus
CN106625676B (en) * 2016-12-30 2018-05-29 易思维(天津)科技有限公司 Three-dimensional vision accurate guiding and positioning method for automobile intelligent manufacturing automatic feeding
CN106625676A (en) * 2016-12-30 2017-05-10 易思维(天津)科技有限公司 Three-dimensional vision accurate guiding and positioning method for automobile intelligent manufacturing automatic feeding
CN106956292A (en) * 2017-04-28 2017-07-18 安徽捷迅光电技术有限公司 Coordinate visual physical correction method based on conveying platform and robot
CN106903720A (en) * 2017-04-28 2017-06-30 安徽捷迅光电技术有限公司 Automatic visual correction method for coordinates of conveying platform and robot
CN107300100A (en) * 2017-05-22 2017-10-27 浙江大学 Visual guidance and approach method of cascade-connection mechanical arm driven by on-line CAD model

Similar Documents

Publication Publication Date Title
US8095237B2 (en) Method and apparatus for single image 3D vision guided robotics
Jorg et al. Flexible robot-assembly using a multi-sensory approach
Du et al. Markerless human–robot interface for dual robot manipulators using Kinect sensor
CN102294695A (en) Robot calibration method and calibration system
Matsuno et al. Flexible rope manipulation by dual manipulator system using vision sensor
CN102902271A (en) Binocular vision-based robot target identifying and gripping system and method
US9089971B2 (en) Information processing apparatus, control method thereof and storage medium
CN102128589A (en) Method for correcting azimuth errors of inner bore of part in process of assembling axle hole
JP2004243215A (en) Robot teaching method for sealer applicator and sealer applicator
CN102922521A (en) Mechanical arm system based on stereo visual serving and real-time calibrating method thereof
US20140288710A1 (en) Robot system and calibration method
CN103706568A (en) System and method for machine vision-based robot sorting
CN102794767A (en) B spline track planning method of robot joint space guided by vision
CN101127083A (en) Welding image identification method
Song et al. On-line stable evolutionary recognition based on unit quaternion representation by motion-feedforward compensation
CN104384765A (en) Automatic welding method based on three-dimensional model and machine vision and welding device based on three-dimensional model and machine vision
CN101396829A (en) Robot control method and robot
CN101359400A (en) Process for positioning spatial position of pipe mouth based on vision
Kragic et al. A framework for visual servoing
CN101840736A (en) Device and method for mounting optical glass under vision guide
US20150224648A1 (en) Robotic system with 3d box location functionality
JPH0847881A (en) Method of remotely controlling robot
JP2011201007A (en) Robot system with visual sensor
CN104049634A (en) Intelligent body fuzzy dynamic obstacle avoidance method based on Camshift algorithm
Perrin et al. Unknown object grasping using statistical pressure models

Legal Events

Date Code Title Description
C06 Publication
C10 Entry into substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)