CN104802166B - Robot control system, robot, robot control program, and robot control method - Google Patents

Robot control system, robot, robot control program, and robot control method

Info

Publication number: CN104802166B
Application number: CN 201510137541
Authority: CN
Grant status: Grant
Prior art keywords: information, image, object, position, robot
Other languages: Chinese (zh)
Other versions: CN104802166A
Inventors: 山口如洋, 长谷川浩, 稻积满广, 狩户信宏, 元吉正树, 恩田健至
Original Assignee: 精工爱普生株式会社 (Seiko Epson Corporation)

Abstract

A robot control system includes: a captured-image acquisition unit that acquires a captured image; and a control unit that controls a robot based on the captured image. The captured-image acquisition unit acquires a captured image showing at least the assembly-receiving object out of an assembly object and an assembly-receiving object (the object to which the assembly object is to be assembled) of an assembling operation. The control unit performs feature quantity detection processing on the assembly-receiving object based on the captured image, and moves the assembly object according to the feature quantity of the assembly-receiving object.

Description

Robot Control System, Robot, Program, and Robot Control Method

[0001] This application is a divisional application of the application filed on October 10, 2014, under Application No. 201410531769.6 and entitled "Robot Control System, Robot, Program, and Robot Control Method".

Technical Field

[0002] The present invention relates to a robot control system, a robot, a program, a robot control method, and the like.

Background Art

[0003] In recent years, industrial robots have been widely introduced at production sites in order to mechanize and automate work previously performed by humans. However, precise calibration is a prerequisite for positioning a robot, and this is an obstacle to introducing robots.

[0004] Here, visual servoing is one means of positioning a robot. Conventional visual servoing is a technique for feedback-controlling a robot based on the difference between a reference image (goal image, target image) and a captured image (current image). Such visual servoing is useful in that it does not require precise calibration, and it has attracted attention as a technique for lowering the barriers to introducing robots.

[0005] As a technique related to such visual servoing, there is, for example, the related art described in Patent Document 1.

[0006] Patent Document 1: Japanese Unexamined Patent Application Publication No. 2011-143494

[0007] When a robot is made to perform, by visual servoing, an assembling operation of assembling an assembly object to an assembly-receiving object, the position and posture of the assembly-receiving object change every time the assembling operation is performed. When the position and posture of the assembly-receiving object change, the position and posture of the assembly object in its assembled state with the assembly-receiving object also change.

[0008] In this case, if visual servoing is performed using the same reference image every time, a correct assembling operation cannot be achieved. This is because the assembly object is moved to the position and posture of the assembly object shown in the reference image, regardless of whether the position and posture of the assembly object in the assembled state have changed.

[0009] In theory, if a different reference image were used every time the actual position of the assembly-receiving object changes, the assembling operation could be performed by visual servoing using reference images; in that case, however, a large number of reference images would have to be prepared, which is unrealistic.

Summary of the Invention

[0010] One aspect of the present invention relates to a robot control system including: a captured-image acquisition unit that acquires a captured image; and a control unit that controls a robot based on the captured image. The captured-image acquisition unit acquires a captured image showing at least the assembly-receiving object out of an assembly object and an assembly-receiving object of an assembling operation, and the control unit performs feature quantity detection processing on the assembly-receiving object based on the captured image and moves the assembly object according to the feature quantity of the assembly-receiving object.

[0011] In this aspect of the invention, the assembly object is moved according to the feature quantity of the assembly-receiving object detected from the captured image.

[0012] Accordingly, the assembling operation can be performed correctly even when the position and posture of the assembly-receiving object change.
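
As a concrete illustration of [0010]-[0012], the following is a minimal Python sketch (not taken from the patent) of detecting feature points of the assembly-receiving object in a captured image and deriving a motion command for the assembly object from them; the use of ORB features and the proportional image-space command are assumptions made for the example.

```python
import cv2
import numpy as np

def detect_receiving_object_features(captured_img, template_img):
    """Detect feature points of the assembly-receiving object by matching ORB
    descriptors of a template of that object against the captured image."""
    orb = cv2.ORB_create()
    kp_t, des_t = orb.detectAndCompute(template_img, None)
    kp_c, des_c = orb.detectAndCompute(captured_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_t, des_c)
    # Image coordinates of the matched feature points in the captured image.
    return np.float32([kp_c[m.trainIdx].pt for m in matches])

def move_command_toward_features(assembly_obj_px, receiving_features_px, gain=0.5):
    """Proportional command (in image space) that moves the assembly object
    toward the centroid of the detected assembly-receiving object features."""
    target = receiving_features_px.mean(axis=0)
    return gain * (target - np.asarray(assembly_obj_px))
```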

[0013] In one aspect of the invention, the control unit may perform the feature quantity detection processing on the assembly object and the assembly-receiving object based on one or more captured images showing the assembly object and the assembly-receiving object, and may move the assembly object based on the feature quantity of the assembly object and the feature quantity of the assembly-receiving object such that the relative position and posture relationship between the assembly object and the assembly-receiving object becomes a target relative position and posture relationship.

[0014] Accordingly, the assembling operation and the like can be performed based on the feature quantities of the assembly object and of the assembly-receiving object detected from the captured images.

[0015] In one aspect of the invention, the control unit may move the assembly object such that the relative position and posture relationship becomes the target relative position and posture relationship, based on a feature quantity set as a target feature quantity among the feature quantities of the assembly-receiving object and a feature quantity set as a feature quantity of interest among the feature quantities of the assembly object.

[0016] Accordingly, the assembly object can be moved such that, for example, the relative position and posture relationship between the set assembly portion of the assembly object and the set assembly-receiving portion of the assembly-receiving object becomes the target relative position and posture relationship.

[0017] In one aspect of the invention, the control unit may move the assembly object such that the feature point of interest of the assembly object coincides with or approaches the target feature point of the assembly-receiving object.

[0018] Accordingly, the assembly portion of the assembly object can be assembled to the assembly-receiving portion of the assembly-receiving object, and so on.
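
One common way to realize the behavior of [0015]-[0018] is classical image-based visual servoing, in which the feature point of interest is driven toward the target feature point. The sketch below assumes a standard point-feature interaction matrix and a scalar gain; neither is specified in the document.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Standard 2x6 interaction matrix of a normalized image point (x, y) at
    depth Z, relating image velocity to the camera twist [vx vy vz wx wy wz]."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_camera_twist(interest_pt, target_pt, depth, gain=0.8):
    """Camera twist that makes the feature point of interest approach
    (and eventually coincide with) the target feature point."""
    error = np.asarray(interest_pt) - np.asarray(target_pt)  # image-space error
    L = point_interaction_matrix(*interest_pt, depth)
    return -gain * np.linalg.pinv(L) @ error                 # v = -lambda * L^+ * e
```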

[0019] In one aspect of the invention, the robot control system may include a reference image storage unit that stores a reference image showing the assembly object in a target position and posture. The control unit may move the assembly object toward the target position and posture based on a first captured image showing the assembly object and on the reference image; and after moving the assembly object, may perform the feature quantity detection processing on the assembly-receiving object based on a second captured image showing at least the assembly-receiving object, and move the assembly object according to the feature quantity of the assembly-receiving object.

[0020] Accordingly, when the same assembling operation is repeated, the same reference image can be used to move the assembly object to the vicinity of the assembly-receiving object, after which the assembling operation can be performed with reference to the detailed actual position and posture of the assembly-receiving object.
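
A sketch of the two-stage strategy of [0019]-[0020], with all robot- and vision-specific steps injected as callables (hypothetical names, not from the patent): a coarse approach against the stored reference image, followed by fine servoing on features detected on the actual assembly-receiving object.

```python
def assemble_with_reference_then_features(capture, apply_cmd, detect_features,
                                          reference_image, receiving_template,
                                          coarse_step, fine_step,
                                          coarse_done, fine_done):
    # Stage 1: coarse approach -- servo toward the stored reference image, which
    # can be reused unchanged for every repetition of the same operation.
    while not coarse_done(capture(), reference_image):
        apply_cmd(coarse_step(capture(), reference_image))
    # Stage 2: fine assembly -- detect the actual assembly-receiving object and
    # servo on its feature quantity, absorbing run-to-run pose changes.
    while not fine_done(capture()):
        features = detect_features(capture(), receiving_template)
        apply_cmd(fine_step(features))
```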

[0021] In one aspect of the invention, the control unit may perform the feature quantity detection processing on a first assembly-receiving object based on a first captured image showing the first assembly-receiving object of the assembling operation, and move the assembly object according to the feature quantity of the first assembly-receiving object; and after moving the assembly object, may perform the feature quantity detection processing on a second assembly-receiving object based on a second captured image showing at least the second assembly-receiving object, and move the assembly object and the first assembly-receiving object according to the feature quantity of the second assembly-receiving object.

[0022] Accordingly, even if the positions of the first assembly-receiving object and the second assembly-receiving object shift each time the assembling operation is performed, the assembling operation of the assembly object, the first assembly-receiving object, and the second assembly-receiving object can still be carried out.

[0023] In one aspect of the invention, the control unit may perform the feature quantity detection processing on the assembly object and a first assembly-receiving object based on one or more first captured images showing the assembly object and the first assembly-receiving object of the assembling operation, and move the assembly object based on the feature quantity of the assembly object and the feature quantity of the first assembly-receiving object such that the relative position and posture relationship between the assembly object and the first assembly-receiving object becomes a first target relative position and posture relationship; and may perform the feature quantity detection processing on a second assembly-receiving object based on a second captured image showing the second assembly-receiving object, and move the assembly object and the first assembly-receiving object based on the feature quantity of the first assembly-receiving object and the feature quantity of the second assembly-receiving object such that the relative position and posture relationship between the first assembly-receiving object and the second assembly-receiving object becomes a second target relative position and posture relationship.

[0024] Accordingly, visual servoing and the like can be performed such that the feature point of interest of the assembly object approaches the target feature point of the first assembly-receiving object, and the feature point of interest of the first assembly-receiving object approaches the target feature point of the second assembly-receiving object.

[0025] In one aspect of the invention, the control unit may perform the feature quantity detection processing on the assembly object, a first assembly-receiving object, and a second assembly-receiving object based on one or more captured images showing the assembly object, the first assembly-receiving object, and the second assembly-receiving object of the assembling operation; may move the assembly object based on the feature quantity of the assembly object and the feature quantity of the first assembly-receiving object such that the relative position and posture relationship between the assembly object and the first assembly-receiving object becomes a first target relative position and posture relationship; and may move the first assembly-receiving object based on the feature quantity of the first assembly-receiving object and the feature quantity of the second assembly-receiving object such that the relative position and posture relationship between the first assembly-receiving object and the second assembly-receiving object becomes a second target relative position and posture relationship.

[0026] Accordingly, an assembling operation in which three workpieces are assembled simultaneously, and the like, can be performed.

[0027] In one aspect of the invention, the control unit may perform the feature quantity detection processing on a second assembly-receiving object based on a first captured image showing the second assembly-receiving object of the assembling operation, and move a first assembly-receiving object according to the feature quantity of the second assembly-receiving object; and may perform the feature quantity detection processing on the first assembly-receiving object based on a second captured image showing the first assembly-receiving object after the movement, and move the assembly object according to the feature quantity of the first assembly-receiving object.

[0028] Accordingly, the robot can be controlled more easily because the assembly object and the first assembly-receiving object do not have to be moved simultaneously.
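
The sequential variant of [0027]-[0028] could be organized as below; the helper callables and template arguments are hypothetical placeholders, not names from the patent.

```python
def sequential_three_part_assembly(capture, detect_features, servo_until_done,
                                   second_receiving_tpl, first_receiving_tpl):
    # Step 1: detect the second assembly-receiving object and move the first
    # assembly-receiving object relative to it.
    feats_2nd = detect_features(capture(), second_receiving_tpl)
    servo_until_done(target_features=feats_2nd, moved_part="first_receiving")

    # Step 2: detect the first assembly-receiving object at its new location
    # and move the assembly object relative to it.
    feats_1st = detect_features(capture(), first_receiving_tpl)
    servo_until_done(target_features=feats_1st, moved_part="assembly_object")
```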

[0029] In one aspect of the invention, the control unit may control the robot by performing visual servoing based on the captured image.

[0030] Accordingly, the robot can be feedback-controlled according to the current work status, and so on.

[0031] Another aspect of the present invention relates to a robot including: a captured-image acquisition unit that acquires a captured image; and a control unit that controls the robot based on the captured image. The captured-image acquisition unit acquires a captured image showing at least the assembly-receiving object out of an assembly object and an assembly-receiving object of an assembling operation, and the control unit performs feature quantity detection processing on the assembly-receiving object based on the captured image and moves the assembly object according to the feature quantity of the assembly-receiving object.

[0032] Another aspect of the present invention relates to a program that causes a computer to function as each of the units described above.

[0033] Another aspect of the present invention relates to a robot control method including: a step of acquiring a captured image showing at least the assembly-receiving object out of an assembly object and an assembly-receiving object of an assembling operation; a step of performing feature quantity detection processing on the assembly-receiving object based on the captured image; and a step of moving the assembly object according to the feature quantity of the assembly-receiving object.

[0034] According to several aspects of the present invention, it is possible to provide a robot control system, a robot, a program, a robot control method, and the like that can perform an assembling operation correctly even when the position and posture of the assembly-receiving object change.

[0035] Another aspect is a robot control device including: a first control unit that generates command values such that an endpoint of an arm of a robot moves to a target position along a path formed based on one or more set teaching (guide) positions; an image acquisition unit that acquires a target image, which is an image including the endpoint when the endpoint is at the target position, and a current image, which is an image including the endpoint when the endpoint is at its current position; a second control unit that generates command values such that the endpoint moves from the current position to the target position based on the current image and the target image; and a drive control unit that moves the arm using the command values generated by the first control unit and the command values generated by the second control unit.

[0036] According to this aspect, command values are generated such that the endpoint of the robot arm moves to the target position along a path formed based on one or more set teaching positions, and command values are also generated such that the endpoint moves from the current position to the target position based on the current image and the target image. The arm is then moved using these command values. This makes it possible to maintain the high speed of position control while also coping with cases where the target position changes.

[0037] Another aspect is a robot control device including: a control unit that generates a trajectory of an endpoint of an arm of a robot such that the endpoint approaches a target position; and an image acquisition unit that acquires a current image, which is an image including the endpoint when the endpoint is at its current position, and a target image, which is an image including the endpoint when the endpoint is at the target position. The control unit moves the arm based on a path formed based on one or more set teaching positions, the current image, and the target image. This makes it possible to maintain the high speed of position control while also coping with cases where the target position changes.

[0038] Here, the drive control unit may move the arm using a signal obtained by superimposing the command values generated by the first control unit and the command values generated by the second control unit, each with a predetermined weight (component). This makes it possible to move the trajectory of the endpoint toward a desired trajectory. For example, the trajectory of the endpoint can be formed as a trajectory that, while not ideal, keeps the target object within the field of view of the hand-eye camera.

[0039] Here, the drive control unit may determine the predetermined weights based on the difference between the current position and the target position. Since the weights can then be changed continuously according to the distance, the control can be switched smoothly.
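
A sketch of the superposition described in [0038]-[0039]; the clipped linear ramp used for the distance-dependent weight is an illustrative choice, not a formula given in the document.

```python
import numpy as np

def blend_weight(current_pos, target_pos, switch_dist=0.05):
    """Weight of the visual-servo command, rising continuously from 0 to 1 as
    the endpoint approaches the target (distances in meters, assumed)."""
    d = np.linalg.norm(np.asarray(target_pos) - np.asarray(current_pos))
    return float(np.clip(1.0 - d / switch_dist, 0.0, 1.0))

def blended_command(cmd_position_ctrl, cmd_visual_servo, current_pos, target_pos):
    """Superimpose the position-control and visual-servo command values with
    complementary weights: far from the target the fast position control
    dominates, near the target the visual servo takes over."""
    a = blend_weight(current_pos, target_pos)
    return (1.0 - a) * np.asarray(cmd_position_ctrl) + a * np.asarray(cmd_visual_servo)
```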

[0040] Here, an input unit for inputting the predetermined weights may be provided. This allows the arm to be controlled along a trajectory desired by the user.

[0041] Here, a storage unit that stores the predetermined weights may be provided. This makes it possible to use weights that have been initially set in advance.

[0042] Here, the drive control unit may drive the arm using command values based on the trajectory generated by the first control unit when the current position satisfies a predetermined condition, and may drive the arm using command values based on the trajectory generated by the first control unit and command values based on the trajectory generated by the second control unit when the current position does not satisfy the predetermined condition. This enables faster processing.

[0043] Here, a force detection unit that detects a force applied to the endpoint may be provided, together with a third control unit that generates a trajectory of the endpoint such that the endpoint moves from the current position to the target position based on the value detected by the force detection unit. The drive control unit may move the arm using command values based on the trajectory generated by the first control unit, command values based on the trajectory generated by the second control unit, and command values based on the trajectory generated by the third control unit, or using command values based on the trajectory generated by the first control unit and command values based on the trajectory generated by the third control unit. Accordingly, even when the target position moves, or when the target position cannot be confirmed, work can be carried out safely while maintaining the high speed of position control.

[0044] Another aspect is a robot system including: a robot having an arm; a first control unit that generates command values such that an endpoint of the arm moves to a target position along a path formed based on one or more set teaching positions; an imaging unit that captures a target image, which is an image including the endpoint when the endpoint is at the target position, and a current image, which is an image including the endpoint when the endpoint is at its current position at the present time; a second control unit that generates command values such that the endpoint moves from the current position to the target position based on the current image and the target image; and a drive control unit that moves the arm using the command values generated by the first control unit and the command values generated by the second control unit. This makes it possible to maintain the high speed of position control while also coping with cases where the target position changes.

[0045] Another aspect is a robot system including: a robot having an arm; a control unit that generates a trajectory of an endpoint of the arm such that the endpoint approaches a target position; and an imaging unit that captures a current image, which is an image including the endpoint when the endpoint is at its current position at the present time, and a target image, which is an image including the endpoint when the endpoint is at the target position. The control unit moves the arm based on a path formed based on one or more set teaching positions, the current image, and the target image. This makes it possible to maintain the high speed of position control while also coping with cases where the target position changes.

[0046] Another aspect is a robot including: an arm; a first control unit that generates command values such that an endpoint of the arm moves to a target position along a path formed based on one or more set teaching positions; an image acquisition unit that acquires a target image, which is an image including the endpoint when the endpoint is at the target position, and a current image, which is an image including the endpoint when the endpoint is at its current position at the present time; a second control unit that generates command values such that the endpoint moves from the current position to the target position based on the current image and the target image; and a drive control unit that moves the arm using the command values generated by the first control unit and the command values generated by the second control unit. This makes it possible to maintain the high speed of position control while also coping with cases where the target position changes.

[0047] Another aspect is a robot including: an arm; a control unit that generates a trajectory of an endpoint of the arm such that the endpoint approaches a target position; and an image acquisition unit that acquires a current image, which is an image including the endpoint when the endpoint is at its current position, and a target image, which is an image including the endpoint when the endpoint is at the target position. The control unit moves the arm based on a path formed based on one or more set teaching positions, the current image, and the target image. This makes it possible to maintain the high speed of position control while also coping with cases where the target position changes.

[0048] Another aspect is a robot control method including: a step of acquiring a target image, which is an image including an endpoint of a robot arm when the endpoint is at a target position; a step of acquiring a current image, which is an image including the endpoint when the endpoint is at its current position at the present time; and a step of generating command values such that the endpoint moves to the target position along a path formed based on one or more set teaching positions, generating command values such that the endpoint moves from the current position to the target position based on the current image and the target image, and moving the arm using these command values. This makes it possible to maintain the high speed of position control while also coping with cases where the target position changes.

[0049] Another aspect is a robot control method for controlling the arm of a robot that has an arm and an image acquisition unit that acquires a current image, which is an image including an endpoint of the arm when the endpoint is at its current position, and a target image, which is an image including the endpoint when the endpoint is at a target position, in which the arm is controlled using command values of position control performed along a path formed based on one or more set teaching positions and command values of visual servoing performed based on the current image and the target image. This makes it possible to maintain the high speed of position control while also coping with cases where the target position changes.

[0050] Another aspect is a robot control method for controlling the arm of a robot that has an arm and an image acquisition unit that acquires a current image, which is an image including an endpoint of the arm when the endpoint is at its current position, and a target image, which is an image including the endpoint when the endpoint is at a target position, in which position control along a path formed based on one or more set teaching positions and visual servoing based on the current image and the target image are performed simultaneously. This makes it possible to maintain the high speed of position control while also coping with cases where the target position changes.

[0051] Another aspect is a robot control program that causes an arithmetic device to execute: a step of acquiring a target image, which is an image including an endpoint of a robot arm when the endpoint is at a target position; a step of acquiring a current image, which is an image including the endpoint when the endpoint is at its current position at the present time; and a step of generating command values such that the endpoint moves to the target position along a path formed based on one or more set teaching positions, generating command values such that the endpoint moves from the current position to the target position based on the current image and the target image, and moving the arm using these command values. This makes it possible to maintain the high speed of position control while also coping with cases where the target position changes.

[0052] Another aspect relates to a robot control device including: a robot control unit that controls a robot based on image information; a change amount calculation unit that obtains an image feature quantity change amount based on the image information; a change amount estimation unit that calculates an estimated image feature quantity change amount, which is an estimate of the image feature quantity change amount, based on change-amount estimation information that is information on the robot or a target object and is information other than the image information; and an abnormality determination unit that performs abnormality determination by comparing the image feature quantity change amount with the estimated image feature quantity change amount.

[0053] In this aspect, abnormality determination for robot control using image information is performed based on the image feature quantity change amount and the estimated image feature quantity change amount obtained from the change-amount estimation information. This makes it possible to appropriately perform abnormality determination and the like in robot control using image information, in particular in approaches that use image feature quantities.

[0054] In another aspect, the change-amount estimation information may be joint angle information of the robot.

[0055] Accordingly, joint angle information of the robot can be used as the change-amount estimation information.

[0056] In another aspect, the change amount estimation unit may calculate the estimated image feature quantity change amount by applying, to the change amount of the joint angle information, a Jacobian matrix that relates the change amount of the joint angle information to the image feature quantity change amount.

[0057] Accordingly, the estimated image feature quantity change amount and the like can be obtained using the change amount of the joint angle information and the Jacobian matrix.
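
The estimation step of [0056]-[0057] amounts to multiplying the joint-angle change by an image Jacobian. The sketch below assumes the Jacobian is already available (for example from a calibrated model or numerical estimation); its numerical values are placeholders.

```python
import numpy as np

def estimated_feature_change(J_image, q_prev, q_curr):
    """df_est = J * dq, mapping the joint-angle change to an image-feature change."""
    dq = np.asarray(q_curr) - np.asarray(q_prev)   # joint-angle change amount
    return J_image @ dq                            # estimated image feature change amount

# Example with a 2-feature / 3-joint placeholder Jacobian (pixels per radian, assumed):
J = np.array([[120.0, -40.0, 10.0],
              [  5.0,  90.0, -30.0]])
df_est = estimated_feature_change(J, [0.10, -0.20, 0.05], [0.12, -0.19, 0.05])
```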

[0058] In another aspect, the change-amount estimation information may be position and posture information of the end effector of the robot or of the target object.

[0059] Accordingly, position and posture information of the end effector of the robot or of the target object can be used as the change-amount estimation information.

[0060] In another aspect, the change amount estimation unit may calculate the estimated image feature quantity change amount by applying, to the change amount of the position and posture information, a Jacobian matrix that relates the change amount of the position and posture information to the image feature quantity change amount.

[0061] Accordingly, the estimated image feature quantity change amount and the like can be obtained using the change amount of the position and posture information and the Jacobian matrix.

[0062] In another aspect, when an image feature quantity f1 of first image information is acquired at an i-th time (i being a natural number) and an image feature quantity f2 of second image information is acquired at a j-th time (j being a natural number satisfying j ≠ i), the change amount calculation unit may obtain the difference between the image feature quantity f1 and the image feature quantity f2 as the image feature quantity change amount; and when change-amount estimation information p1 corresponding to the first image information is acquired at a k-th time (k being a natural number) and change-amount estimation information p2 corresponding to the second image information is acquired at an l-th time (l being a natural number), the change amount estimation unit may obtain the estimated image feature quantity change amount based on the change-amount estimation information p1 and the change-amount estimation information p2.

[0063] Accordingly, the corresponding image feature quantity change amount, estimated image feature quantity change amount, and the like can be obtained while taking the acquisition times into account.

[0064] In another aspect, the k-th time may be the acquisition time of the first image information, and the l-th time may be the acquisition time of the second image information.

[0065] Accordingly, given that joint angle information can be acquired at high speed, processing that takes the acquisition times into account can be performed easily.

[0066] In another aspect, the abnormality determination unit may compare difference information between the image feature quantity change amount and the estimated image feature quantity change amount with a threshold, and determine that an abnormality has occurred when the difference information is larger than the threshold.

[0067] Accordingly, abnormality determination and the like can be performed by threshold determination.

[0068] In another aspect, the abnormality determination unit may set the threshold larger as the difference between the acquisition times of the two pieces of image information used by the change amount calculation unit to calculate the image feature quantity change amount becomes larger.

[0069] Accordingly, the threshold and the like can be changed according to the situation.
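
A sketch of the threshold test of [0066]-[0069]; the base threshold and the linear growth with the image acquisition-time gap are assumptions, since the document only states that a larger time gap should give a larger threshold.

```python
import numpy as np

def is_abnormal(df_measured, df_estimated, t_img1, t_img2,
                base_threshold=20.0, per_second=10.0):
    """Return True if the measured image-feature change deviates from the
    estimated change by more than a threshold that grows with the time gap
    between the two images (units: pixels and seconds, assumed)."""
    diff = np.linalg.norm(np.asarray(df_measured) - np.asarray(df_estimated))
    threshold = base_threshold + per_second * abs(t_img2 - t_img1)
    return diff > threshold
```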

[0070] In another aspect, when an abnormality is detected by the abnormality determination unit, the robot control unit may perform control to stop the robot.

[0071] Accordingly, safe control of the robot and the like can be realized by stopping the robot when an abnormality is detected.

[0072] In another aspect, when an abnormality is detected by the abnormality determination unit, the robot control unit may skip control based on the abnormality-determination image information, which is the image information acquired at the later time in the time series out of the two pieces of image information used by the change amount calculation unit to calculate the image feature quantity change amount, and may perform control based on image information acquired at a time earlier than the abnormality-determination image information.

[0073] Accordingly, control of the robot using the abnormality-determination image information can be skipped when an abnormality is detected, and so on.

[0074] Another aspect relates to a robot control device including: a robot control unit that controls a robot based on image information; a change amount calculation unit that obtains a position and posture change amount representing the change amount of position and posture information of the end effector of the robot or of a target object, or a joint angle change amount representing the change amount of joint angle information of the robot; a change amount estimation unit that obtains an image feature quantity change amount based on the image information and, from the image feature quantity change amount, calculates an estimated position and posture change amount, which is an estimate of the position and posture change amount, or an estimated joint angle change amount, which is an estimate of the joint angle change amount; and an abnormality determination unit that performs abnormality determination by comparing the position and posture change amount with the estimated position and posture change amount, or by comparing the joint angle change amount with the estimated joint angle change amount.

[0075] In another aspect, the estimated position and posture change amount or the estimated joint angle change amount is obtained from the image feature quantity change amount, and abnormality determination is performed by comparing the position and posture change amount with the estimated position and posture change amount, or by comparing the joint angle change amount with the estimated joint angle change amount. In this way, too, abnormality determination can be performed appropriately in robot control using image information, in particular in approaches that use image feature quantities, through a comparison of position and posture information or of joint angle information.
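
For the reversed comparison of [0074]-[0075], one possible realization maps the image-feature change back to an estimated joint-angle change via the pseudo-inverse of an assumed image Jacobian and compares it with the change measured from the encoders; the Jacobian and the threshold are placeholders, not values from the document.

```python
import numpy as np

def abnormal_by_joint_change(J_image, df_image, q_prev, q_curr, threshold=0.05):
    """Abnormal if the encoder-measured joint-angle change and the joint-angle
    change inferred from the image-feature change disagree too much."""
    dq_measured = np.asarray(q_curr) - np.asarray(q_prev)          # from encoders
    dq_estimated = np.linalg.pinv(J_image) @ np.asarray(df_image)  # from image change
    return np.linalg.norm(dq_measured - dq_estimated) > threshold  # radians (assumed)
```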

[0076] In another aspect, the change amount calculation unit may perform any one of: processing of acquiring a plurality of pieces of the position and posture information and obtaining the difference between them as the position and posture change amount; processing of acquiring a plurality of pieces of the position and posture information and obtaining the joint angle change amount from the difference between them; processing of acquiring a plurality of pieces of the joint angle information and obtaining the difference between them as the joint angle change amount; and processing of acquiring a plurality of pieces of the joint angle information and obtaining the position and posture change amount from the difference between them.

[0077] Accordingly, the position and posture change amount, the joint angle change amount, and the like can be obtained by various means.

[0078] Another aspect relates to a robot including: a robot control unit that controls the robot based on image information; a change amount calculation unit that obtains an image feature quantity change amount based on the image information; a change amount estimation unit that calculates an estimated image feature quantity change amount, which is an estimate of the image feature quantity change amount, based on change-amount estimation information that is information on the robot or a target object and is information other than the image information; and an abnormality determination unit that performs abnormality determination by comparing the image feature quantity change amount with the estimated image feature quantity change amount.

[0079] In another aspect, abnormality determination for robot control using image information is performed based on the image feature quantity change amount and the estimated image feature quantity change amount obtained from the change-amount estimation information. This makes it possible to appropriately perform abnormality determination in robot control using image information, in particular in approaches that use image feature quantities.

[0080] Another aspect relates to a robot control method for controlling a robot based on image information, including: a step of performing change amount calculation processing for obtaining an image feature quantity change amount based on the image information; a step of performing change amount estimation processing for calculating an estimated image feature quantity change amount, which is an estimate of the image feature quantity change amount, based on change-amount estimation information that is information on the robot or a target object and is information other than the image information; and a step of performing abnormality determination by comparing the image feature quantity change amount with the estimated image feature quantity change amount.

[0081] In another aspect, abnormality determination for robot control using image information is performed based on the image feature quantity change amount and the estimated image feature quantity change amount obtained from the change-amount estimation information. This makes it possible to appropriately perform abnormality determination in robot control using image information, in particular in approaches that use image feature quantities.

[0082] Another aspect relates to a program that causes a computer to function as: a robot control unit that controls a robot based on image information; a change amount calculation unit that obtains an image feature quantity change amount based on the image information; a change amount estimation unit that calculates an estimated image feature quantity change amount, which is an estimate of the image feature quantity change amount, based on change-amount estimation information that is information on the robot or a target object and is information other than the image information; and an abnormality determination unit that performs abnormality determination by comparing the image feature quantity change amount with the estimated image feature quantity change amount.

[0083] In another aspect, the computer is caused to perform abnormality determination for robot control using image information based on the image feature quantity change amount and the estimated image feature quantity change amount obtained from the change-amount estimation information. This makes it possible to appropriately perform abnormality determination in robot control using image information, in particular in approaches that use image feature quantities.

[0084] In this way, according to several aspects, it is possible to provide a robot control device, a robot, a robot control method, and the like that appropriately detect abnormalities in control that uses image feature quantities within robot control based on image information.

[0085] Another aspect relates to a robot that performs inspection processing for inspecting an inspection object using a captured image of the inspection object captured by an imaging unit, wherein the robot generates second inspection information including an inspection region for the inspection processing based on first inspection information, and performs the inspection processing based on the second inspection information.

[0086] In another aspect, second inspection information including an inspection region is generated based on first inspection information. In general, which region of an image used for inspection (in a narrow sense, visual inspection) is used for processing depends on information such as the shape of the inspection object and on the content of the work performed on the inspection object. The inspection region must therefore be reset every time the inspection object or the work content changes, which places a heavy burden on the user. In this respect, generating the second inspection information from the first inspection information makes it possible to easily determine the inspection region and the like.

[0087] In another aspect, the second inspection information may include a viewpoint information group containing a plurality of pieces of viewpoint information, and each piece of viewpoint information in the viewpoint information group may include a viewpoint position and a line-of-sight direction of the imaging unit in the inspection processing.

[0088] Accordingly, a viewpoint information group and the like can be generated as the second inspection information.

[0089] In another aspect, for each piece of viewpoint information in the viewpoint information group, a priority may be set for moving the imaging unit to the viewpoint position and line-of-sight direction corresponding to that viewpoint information.

[0090] Accordingly, a priority and the like can be set for each piece of viewpoint information included in the viewpoint information group.

[0091] In another aspect, the imaging unit may be moved to the viewpoint position and line-of-sight direction corresponding to each piece of viewpoint information in the viewpoint information group in accordance with a movement order set based on the priorities.

[0092] Accordingly, the inspection processing and the like can be performed by actually controlling the imaging unit using the plurality of pieces of viewpoint information for which priorities have been set.

[0093] In another aspect, when it is determined based on movable range information that the imaging unit cannot be moved to the viewpoint position and line-of-sight direction corresponding to an i-th piece of viewpoint information (i being a natural number) among the plurality of pieces of viewpoint information, the imaging unit may be moved not based on the i-th viewpoint information but based on a j-th piece of viewpoint information (j being a natural number satisfying i ≠ j) that follows the i-th viewpoint information in the movement order.

[0094] Accordingly, control of the imaging unit and the like that takes the movable range of the robot into account can be realized.
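
A sketch of the viewpoint handling of [0089]-[0094]: viewpoints are visited in an order derived from their priorities, and any viewpoint outside the movable range is skipped in favor of the next one. The data structure and the injected callables are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Viewpoint:
    position: tuple   # viewpoint position of the imaging unit
    direction: tuple  # line-of-sight direction
    priority: int     # higher value = visited earlier (assumed convention)

def run_inspection(viewpoints, reachable, move_imaging_unit, inspect_at):
    """`reachable`, `move_imaging_unit` and `inspect_at` are injected callables
    standing in for the movable-range check, the arm motion and the inspection step."""
    for vp in sorted(viewpoints, key=lambda v: v.priority, reverse=True):
        if not reachable(vp.position, vp.direction):  # movable range information
            continue                                   # skip and use the next viewpoint
        move_imaging_unit(vp.position, vp.direction)
        inspect_at(vp)
```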

[0095] In another aspect, the first inspection information may include a relative inspection-processing target position with respect to the inspection object, an object coordinate system corresponding to the inspection object may be set with the inspection-processing target position as a reference, and the viewpoint information may be generated using the object coordinate system.

[0096] Accordingly, viewpoint information in the object coordinate system and the like can be generated.

[0097] 另外,在另一方式中,也可W构成为,上述第一检查信息包含表示上述检查对象物的全局坐标系中的位置姿势的对象物位置姿势信息,根据基于上述对象物位置姿势信息而求出的上述全局坐标系与上述对象物坐标系的相对关系,求出上述全局坐标系中的上述视点信息,并根据上述全局坐标系中的可动范围信息与上述全局坐标系中的上述视点信息, 对是否能够使上述拍摄部向上述视点位置W及上述视线方向移动进行判定。 [0097] Further, in another aspect, W may be arranged such that the first inspection object position information includes information of position and posture of the posture of the global coordinate system of the object to be examined in accordance with the position and orientation of the object based on the information obtaining the relationship between the global coordinate system relative to the coordinate system of the object, obtains the viewpoint information of the global coordinate system, and the information of the movable range in accordance with a global coordinate system and the global coordinate system of the above-described the viewpoint information on whether to make the imaging portion moves to the viewpoint position and the sight-line direction W determination.

[0098] 由此,能够生成全局坐标系中的视点信息、W及根据该视点信息与机器人的可动范围信息而控制拍摄部的移动等。 [0098] Accordingly, it is possible to generate information in the global viewpoint coordinate system, W, and movement control unit according to the imaging information of the movable range of the robot viewpoint information and the like.
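As a concrete illustration of paragraph [0097], the sketch below converts a viewpoint defined in the object coordinate system into the global coordinate system using a homogeneous transform built from the object position-and-posture information. This is only an assumed formulation using NumPy; the embodiment does not prescribe a particular representation, and the example pose values are arbitrary.

```python
import numpy as np

def pose_to_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def viewpoint_object_to_global(T_global_object: np.ndarray,
                               p_obj: np.ndarray,
                               d_obj: np.ndarray):
    """Map a viewpoint position p_obj and line-of-sight direction d_obj,
    given in the object coordinate system, into the global coordinate system."""
    p_global = (T_global_object @ np.append(p_obj, 1.0))[:3]
    d_global = T_global_object[:3, :3] @ d_obj   # directions ignore translation
    return p_global, d_global

# Example: inspection object rotated 90 degrees about Z and shifted 0.5 m in X.
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
T = pose_to_transform(Rz, np.array([0.5, 0.0, 0.0]))
p, d = viewpoint_object_to_global(T, np.array([0.0, 0.0, 0.2]), np.array([0.0, 0.0, -1.0]))
# The reachability of (p, d) can then be judged against movable-range
# information expressed in the same global coordinate system.
```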

[0099] In another aspect, the inspection process may be a process performed on the result of a robot operation, and the first inspection information may be information acquired during the robot operation.

[0100] This makes it possible to acquire the first inspection information and the like during the robot operation.

[0101] In another aspect, the first inspection information may include at least one of shape information of the inspection object, position-and-posture information of the inspection object, and an inspection-target position relative to the inspection object.

[0102] This makes it possible to acquire, as the first inspection information, at least one of the shape information, the position-and-posture information, and the inspection-target position.

[0103] In another aspect, the first inspection information may include three-dimensional model data of the inspection object.

[0104] This makes it possible to acquire three-dimensional model data as the first inspection information.

[0105] In another aspect, the inspection process may be a process performed on the result of a robot operation, and the three-dimensional model data may include post-operation three-dimensional model data obtained by performing the robot operation, and pre-operation three-dimensional model data, which is the three-dimensional model data of the inspection object before the robot operation.

[0106] This makes it possible to acquire, as the first inspection information, the three-dimensional model data before and after the operation.

[0107] In another aspect, the second inspection information may include a pass image, the pass image being an image obtained by capturing the three-dimensional model data with a virtual camera placed at the viewpoint position and line-of-sight direction corresponding to the viewpoint information.

[0108] This makes it possible to acquire a pass image and the like as the second inspection information from the three-dimensional model data and the viewpoint information.

[0109] In another aspect, the second inspection information may include a pass image and a pre-operation image, the pass image being an image obtained by capturing the post-operation three-dimensional model data with a virtual camera placed at the viewpoint position and line-of-sight direction corresponding to the viewpoint information, and the pre-operation image being an image obtained by capturing the pre-operation three-dimensional model data with the same virtual camera; the inspection region may be obtained by comparing the pre-operation image with the pass image.

[0110] This makes it possible to obtain the pass image and the pre-operation image from the three-dimensional model data before and after the operation and the viewpoint information, and to obtain the inspection region and the like from the comparison between them.

[0111] In another aspect, in the comparison, a difference image, which is the difference between the pre-operation image and the pass image, may be obtained, and the inspection region may be a region of the difference image that contains the inspection object.

[0112] This makes it possible to obtain the inspection region and the like using the difference image.
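The comparison in paragraph [0111] can be pictured with the following OpenCV sketch. It is only an assumed implementation (the image file names and the binarization threshold are placeholders): it takes the absolute difference between the pre-operation image and the pass image and returns the bounding box of the changed pixels as the inspection region.

```python
import cv2
import numpy as np

def inspection_region_from_diff(pre_op_img: np.ndarray, pass_img: np.ndarray):
    """Return (x, y, w, h) of the region that differs between the
    pre-operation image and the pass image, i.e. the inspection region."""
    diff = cv2.absdiff(pre_op_img, pass_img)            # difference image
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None                                     # images identical: no region found
    x, y = int(xs.min()), int(ys.min())
    return x, y, int(xs.max()) - x + 1, int(ys.max()) - y + 1

# Hypothetical usage:
# pre = cv2.imread("pre_operation.png"); ok = cv2.imread("pass.png")
# region = inspection_region_from_diff(pre, ok)
```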

[0113] In another aspect, the second inspection information may include a pass image and a pre-operation image, the pass image being an image obtained by capturing the post-operation three-dimensional model data with a virtual camera placed at the viewpoint position and line-of-sight direction corresponding to the viewpoint information, and the pre-operation image being an image obtained by capturing the pre-operation three-dimensional model data with the same virtual camera; a threshold used in the inspection process performed on the captured image and the pass image may be set on the basis of the similarity between the pre-operation image and the pass image.

[0114] This makes it possible to set the threshold for the inspection process and the like using the similarity between the pre-operation image and the pass image.
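One possible reading of paragraphs [0113] and [0114] is sketched below: the similarity between the pre-operation image and the pass image indicates how closely a "not yet assembled" scene can resemble the pass image, so the pass/fail threshold for the actual inspection is placed between that value and 1. The similarity measure (normalized cross-correlation) and the midpoint rule are assumptions made purely for illustration.

```python
import cv2
import numpy as np

def similarity(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Normalized cross-correlation between two equally sized grayscale images."""
    score = cv2.matchTemplate(img_a, img_b, cv2.TM_CCORR_NORMED)
    return float(score[0, 0])

def inspection_threshold(pre_op_gray: np.ndarray, pass_gray: np.ndarray) -> float:
    s = similarity(pre_op_gray, pass_gray)
    # A captured image should resemble the pass image more closely than the
    # pre-operation image does; place the threshold halfway in between.
    return (s + 1.0) / 2.0

# At inspection time (hypothetical): judge "pass" when
# similarity(captured_gray, pass_gray) >= inspection_threshold(pre_gray, pass_gray)
```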

[0115] In another aspect, the robot may include at least a first arm and a second arm, and the imaging unit may be a hand-eye camera provided on at least one of the first arm and the second arm.

[0116] This makes it possible to perform the inspection process and the like using two or more arms and a hand-eye camera provided on at least one of the arms.

[0117] Another aspect relates to a processing apparatus that outputs information used in an inspection process in which an inspection object is inspected using a captured image of the inspection object taken by an imaging unit. On the basis of first inspection information, the processing apparatus generates second inspection information that includes viewpoint information, containing the viewpoint position and line-of-sight direction of the imaging unit in the inspection process, and an inspection region of the inspection process, and outputs the second inspection information to the apparatus that performs the inspection process.

[0118] In this aspect as well, second inspection information including an inspection region is generated on the basis of the first inspection information. In general, which region of the image used for the inspection (in a narrow sense, an appearance inspection) is used for processing depends on information such as the shape of the inspection object and the content of the work performed on it, so the inspection region must be reset every time the inspection object or the work content changes, which places a large burden on the user. In this respect, by generating the second inspection information from the first inspection information, the inspection region can be determined easily, and another apparatus can be made to perform the inspection process and the like.

[0119] Another aspect relates to an inspection method for performing an inspection process in which an inspection object is inspected using a captured image of the inspection object taken by an imaging unit, the inspection method including a step of generating, on the basis of first inspection information, second inspection information that includes viewpoint information, containing the viewpoint position and line-of-sight direction of the imaging unit in the inspection process, and an inspection region of the inspection process.

[0120] In this aspect as well, second inspection information including an inspection region is generated on the basis of the first inspection information. Since the inspection region would otherwise have to be reset every time the inspection object or the work content changes, generating the second inspection information from the first inspection information makes it possible to determine the inspection region and the like easily.

[0121] Thus, according to several aspects, it is possible to provide a robot, a processing apparatus, an inspection method, and the like that, by generating the second inspection information required for the inspection from the first inspection information, reduce the burden on the user and allow the inspection to be performed easily.

BRIEF DESCRIPTION OF THE DRAWINGS

[0122] FIG. 1 is an explanatory diagram of an assembly operation performed by visual servoing.

[0123] FIGS. 2A and 2B are explanatory diagrams of a positional shift of the object to be assembled.

[0124] FIG. 3 shows a system configuration example of the present embodiment.

[0125] FIG. 4 is an explanatory diagram of an assembly operation performed by visual servoing based on feature quantities of the object to be assembled.

[0126] FIG. 5 is an example of a captured image used in visual servoing based on feature quantities of the object to be assembled.

[0127] FIG. 6 is an explanatory diagram of the assembled state.

[0128] FIG. 7 is a flowchart of visual servoing based on feature quantities of the object to be assembled.

[0129] FIG. 8 is another flowchart of visual servoing based on feature quantities of the object to be assembled.

[0130] FIG. 9 is an explanatory diagram of a process of moving the assembly object to a position directly above the object to be assembled.

[0131] FIG. 10 is an explanatory diagram of an assembly operation performed by two types of visual servoing.

[0132] FIG. 11 is a flowchart of processing in the case where two types of visual servoing are performed in succession.

[0133] FIGS. 12(A) to 12(D) are explanatory diagrams of reference images and captured images.

[0134] FIGS. 13A and 13B are explanatory diagrams of an assembly operation involving three workpieces.

[0135] FIGS. 14(A) to 14(C) are explanatory diagrams of captured images used when assembling three workpieces.

[0136] FIG. 15 is a flowchart of processing when assembling three workpieces.

[0137] FIGS. 16(A) to 16(C) are explanatory diagrams of captured images used when assembling three workpieces simultaneously.

[0138] FIG. 17 is a flowchart of processing when assembling three workpieces simultaneously.

[0139] FIGS. 18(A) to 18(C) are explanatory diagrams of captured images used when assembling three workpieces in a different order.

[0140] FIGS. 19A and 19B show configuration examples of the robot.

[0141] FIG. 20 shows a configuration example of a robot control system that controls robots via a network.

[0142] FIG. 21 is a diagram showing an example of the configuration of a robot system 1 according to a second embodiment.

[0143] FIG. 22 is a block diagram showing an example of the functional configuration of the robot system 1.

[0144] FIG. 23 is a data flow diagram of the robot system 1.

[0145] FIG. 24 is a diagram showing the hardware configuration of a control unit 20.

[0146] FIG. 25A is a diagram explaining the trajectory of an end point when an arm 11 is controlled by position control and visual servoing, and FIG. 25B is an example of a target image.

[0147] FIG. 26 is a diagram explaining a component α.

[0148] FIG. 27 is a flowchart showing the processing flow of a robot system 2 according to a third embodiment of the invention.

[0149] FIG. 28 is a diagram explaining the position of an object, the position of a switching point, and the trajectory of the end point.

[0150] FIG. 29 is a diagram showing an example of the configuration of a robot system 3 according to a fourth embodiment of the invention.

[0151] FIG. 30 is a block diagram showing an example of the functional configuration of the robot system 3.

[0152] FIG. 31 is a flowchart showing the processing flow of the robot system 3.

[0153] FIG. 32 is a diagram showing a fitting operation in which the robot system 3 inserts a workpiece into a hole.

[0154] FIG. 33 is a flowchart showing the processing flow of a robot system 4 according to a fifth embodiment of the invention.

[0155] FIG. 34 is a diagram showing a fitting operation in which the robot system 4 inserts a workpiece into a hole.

[0156] FIG. 35 shows a configuration example of the robot control device according to the present embodiment.

[0157] FIG. 36 shows a detailed configuration example of the robot control device according to the present embodiment.

[0158] FIG. 37 shows an arrangement example of an imaging unit that acquires image information.

[0159] FIG. 38 shows a configuration example of the robot according to the present embodiment.

[0160] FIG. 39 shows another example of the structure of the robot according to the present embodiment.

[0161] FIG. 40 shows a configuration example of a general visual servo control system.

[0162] FIG. 41 is a diagram explaining the relationship among the change in image feature quantity, the change in position-and-posture information, the change in joint angle information, and the Jacobian matrices.

[0163] FIG. 42 is a diagram explaining visual servo control.

[0164] FIGS. 43A and 43B are explanatory diagrams of the abnormality detection method of the present embodiment.

[0165] FIG. 44 is an explanatory diagram of a method of setting a threshold according to the difference between image acquisition times.

[0166] FIG. 45 is a diagram showing the relationship among the image acquisition time, the acquisition time of joint angle information, and the acquisition time of image feature quantities.

[0167] FIG. 46 is another diagram showing the relationship among the image acquisition time, the acquisition time of joint angle information, and the acquisition time of image feature quantities.

[0168] FIG. 47 is a diagram explaining, with mathematical expressions, the mutual relationship among the change in image feature quantity, the change in position-and-posture information, and the change in joint angle information.

[0169] FIG. 48 is a flowchart explaining the processing of the present embodiment.

[0170] FIG. 49 shows another detailed configuration example of the robot control device according to the present embodiment.

[0171] FIG. 50 shows a configuration example of the robot according to the present embodiment.

[0172] FIGS. 51A and 51B show configuration examples of the processing apparatus according to the present embodiment.

[0173] FIG. 52 shows a configuration example of the robot according to the present embodiment.

[0174] FIG. 53 shows another configuration example of the robot according to the present embodiment.

[0175] FIG. 54 shows a configuration example of an inspection apparatus that uses the second inspection information.

[0176] FIG. 55 shows examples of the first inspection information and the second inspection information.

[0177] FIG. 56 is a flowchart explaining the flow of offline processing.

[0178] FIGS. 57A and 57B show examples of shape information (three-dimensional model data).

[0179] FIG. 58 shows an example of viewpoint candidate information used to generate viewpoint information.

[0180] FIG. 59 shows an example of coordinate values of the viewpoint candidate information in the object coordinate system.

[0181] FIG. 60 shows a setting example of the object coordinate system based on the inspection-target position.

[0182] FIGS. 61A to 61G show examples of pre-operation images and pass images corresponding to the respective pieces of viewpoint information.

[0183] FIGS. 62A to 62D are explanatory diagrams of a method of setting the inspection region.

[0184] FIGS. 63A to 63D are explanatory diagrams of a method of setting the inspection region.

[0185] FIGS. 64A to 64D are explanatory diagrams of a method of setting the inspection region.

[0186] FIG. 65A and the subsequent figures of FIG. 65 are explanatory diagrams of the similarity calculation processing performed before and after the operation.

[0187] FIG. 66A and the subsequent figures of FIG. 66 are explanatory diagrams of the similarity calculation processing performed before and after the operation.

[0188] FIGS. 67A to 67E are explanatory diagrams of the priorities of viewpoint information.

[0189] FIG. 68 is a flowchart explaining the flow of online processing.

[0190] FIGS. 69A and 69B show a comparison between viewpoint information in the object coordinate system and viewpoint information in the robot coordinate system.

[0191] FIGS. 70A and 70B are explanatory diagrams of the image rotation angle.

DETAILED DESCRIPTION OF THE EMBODIMENTS

[0192] The present embodiment will be described below. Note that the embodiment described below does not unduly limit the contents of the invention described in the claims, and not all of the configurations described in the embodiment are necessarily essential constituent elements of the invention.

[0193] 1. Approach of the Present Embodiment

[0194] First Embodiment

[0195] As shown in FIG. 1, a case will be described here in which an assembly object WK1 gripped by the hand of the robot is assembled to an object to be assembled WK2. The hand of the robot is provided at the tip of the arm AM of the robot.

[0196] First, as a comparative example for the present embodiment, when the assembly operation shown in FIG. 1 is performed by visual servoing using the above-described reference image, the robot is controlled on the basis of a captured image taken by a camera (imaging unit) CM and a reference image prepared in advance. Specifically, the assembly object WK1 is moved, as indicated by the arrow YJ, to the position of the assembly object WK1R shown in the reference image, and is thereby assembled to the object to be assembled WK2.

[0197] FIG. 2A shows the reference image RIM used in this case, and FIG. 2B shows the position in real space (three-dimensional space) of the object to be assembled WK2 shown in the reference image RIM. The reference image RIM of FIG. 2A shows the object to be assembled WK2 and the assembly object WK1R (corresponding to WK1R in FIG. 1) in the assembled state (or in the state immediately before assembly). In visual servoing using this reference image RIM, the assembly object WK1 is moved such that the position and posture of the assembly object WK1 shown in the captured image coincide with the position and posture of the assembly object WK1R in the assembled state shown in the reference image RIM.

[0198] However, as described above, when the assembly operation is actually performed, the position and posture of the object to be assembled WK2 may change. For example, as shown in FIG. 2B, the center-of-gravity position of the object to be assembled WK2 shown in the reference image RIM of FIG. 2A is GC1 in real space. In contrast, the actual object to be assembled WK2 is displaced, and its center-of-gravity position is GC2. In this case, even if the actual assembly object WK1 is moved so as to match the position and posture of the assembly object WK1R shown in the reference image RIM, the assembled state with the actual object to be assembled WK2 cannot be reached, and the assembly operation cannot be performed correctly. This is because, when the position and posture of the object to be assembled WK2 change, the position and posture that the assembly object WK1 must take in the assembled state with WK2 also change.

[0199] Therefore, the robot control system 100 and the like of the present embodiment can perform the assembly operation correctly even when the position and posture of the object to be assembled change.

[0200] Specifically, FIG. 3 shows a configuration example of the robot control system 100 of the present embodiment. The robot control system 100 of the present embodiment includes a captured-image acquisition unit 110 that acquires a captured image from an imaging unit 200, and a control unit 120 that controls a robot 300 on the basis of the captured image. The robot 300 has an end effector (hand) 310 and an arm 320. The configurations of the imaging unit 200 and the robot 300 will be described in detail later.

[0201] First, the captured-image acquisition unit 110 acquires a captured image showing at least the object to be assembled, out of the assembly object and the object to be assembled involved in the assembly operation.

[0202] The control unit 120 then performs feature quantity detection processing on the object to be assembled on the basis of the captured image, and moves the assembly object on the basis of the feature quantities of the object to be assembled. The processing for moving the assembly object also includes processing for outputting control information (control signals) to the robot 300. The functions of the control unit 120 can be realized by hardware such as various processors (a CPU or the like) or an ASIC (a gate array or the like), or by a program or the like.

[0203] In this way, whereas visual servoing using a reference image (the comparative example) moves the assembly object on the basis of the feature quantities of the assembly object in the reference image, the present embodiment moves the assembly object on the basis of the feature quantities of the object to be assembled shown in the captured image. For example, as shown in FIG. 4, the feature quantities of the workpiece WK2, which is the object to be assembled, are detected in the captured image taken by the camera CM, and the workpiece WK1, which is the assembly object, is moved as indicated by the arrow YJ on the basis of the detected feature quantities of the workpiece WK2.

[0204] The captured image taken by the camera CM shows the object to be assembled WK2 at the current time (the time of imaging). Therefore, the assembly object WK1 can be moved to the current position of the object to be assembled WK2. This prevents the assembly object WK1 from being moved to a position where the assembled state cannot currently be achieved, as in the failure case of visual servoing using the reference image (the problem of the comparative example explained with FIG. 1). Furthermore, since a new target position for visual servoing is set from the captured image each time the assembly operation is performed, a correct target position can be set even when the position and posture of the object to be assembled WK2 change.

[0205] As described above, the assembly operation can be performed correctly even when the position and posture of the object to be assembled change. Moreover, in the present embodiment, there is no need to prepare a reference image in advance, so the preparation cost of visual servoing can be reduced.

[0206] In this way, the control unit 120 performs visual servoing on the basis of the captured image, thereby controlling the robot.

[0207] This makes it possible to perform feedback control and the like on the robot in accordance with the current work situation.

[0208] Note that the robot control system 100 is not limited to the configuration of FIG. 3, and various modifications are possible, such as omitting some of the above components or adding other components. As shown in FIG. 19B described later, the robot control system 100 of the present embodiment may be included in the robot 300 and configured integrally with the robot 300. Furthermore, as shown in FIG. 20 described later, the functions of the robot control system 100 may be realized by a server 500 and terminal devices 330 provided in the respective robots 300.

[0209] For example, when the robot control system 100 and the imaging unit 200 are connected via a network including at least one of a wired and a wireless connection, the captured-image acquisition unit 110 may be a communication unit (interface unit) that communicates with the imaging unit 200. When the robot control system 100 includes the imaging unit 200, the captured-image acquisition unit 110 may be the imaging unit 200 itself.

[0210] Here, the captured image refers to an image obtained by imaging with the imaging unit 200. The captured image may also be an image stored in an external storage unit or an image acquired via a network. The captured image is, for example, the image PIM shown in FIG. 5 described later.

[0211] The assembly operation refers to an operation of assembling a plurality of work objects, specifically, an operation of assembling the assembly object to the object to be assembled. The assembly operation is, for example, an operation of placing the workpiece WK1 on (or next to) the workpiece WK2, an operation of inserting (fitting) the workpiece WK1 into the workpiece WK2 (an insertion or fitting operation), or an operation of bonding, connecting, attaching, or welding the workpiece WK1 to the workpiece WK2 (a bonding, connecting, attaching, or welding operation).

[0212] The assembly object refers to the object that is assembled to the object to be assembled in the assembly operation. In the example of FIG. 4, it is the workpiece WK1.

[0213] On the other hand, the object to be assembled refers to the object to which the assembly object is assembled in the assembly operation. In the example of FIG. 4, it is the workpiece WK2.

[0214] 2. Details of the Processing

[0215] Next, the processing of the present embodiment will be described in detail.

[0216] 2.1. Visual Servoing Based on Feature Quantities of the Object to Be Assembled

[0217] The captured-image acquisition unit 110 of the present embodiment acquires one or more captured images showing the assembly object and the object to be assembled. The control unit 120 then performs feature quantity detection processing on the assembly object and the object to be assembled on the basis of the acquired captured image(s). The control unit 120 then moves the assembly object on the basis of the feature quantities of the assembly object and the feature quantities of the object to be assembled, such that the relative position-and-posture relationship between the assembly object and the object to be assembled becomes a target relative position-and-posture relationship.

[0218] Here, the target relative position-and-posture relationship refers to the relative position-and-posture relationship between the assembly object and the object to be assembled that serves as the goal when the assembly operation is performed by visual servoing. In the example of FIG. 4, the relative position and posture at the time when the workpiece WK1 comes into contact with (abuts) the triangular hole of the workpiece WK2 is the target relative position and posture.

[0219] This makes it possible to perform the assembly operation and the like on the basis of the feature quantities of the assembly object and of the object to be assembled detected from the captured image(s). The one or more captured images acquired by the captured-image acquisition unit 110 will be described in detail later.

[0220] In most assembly operations, the part of the assembly object that is assembled to the object to be assembled (the assembling part) and the part of the object to be assembled to which the assembly object is assembled (the assembled part) are usually determined in advance. In the example of FIG. 4, the assembling part of the assembly object is the bottom surface BA of the workpiece WK1, and the assembled part of the object to be assembled is the triangular hole of the workpiece WK2. In the assembly operation of FIG. 4, the assembling part BA is fitted into the hole that is the assembled part; it would be meaningless to assemble, for example, the side surface SA of the workpiece WK1 to the hole. It is therefore preferable to set the assembling part of the assembly object and the assembled part of the object to be assembled in advance.

[0221] Accordingly, the control unit 120 moves the assembly object such that the relative position-and-posture relationship between the assembly object and the object to be assembled becomes the target relative position-and-posture relationship, on the basis of the feature quantity set as the target feature quantity among the feature quantities of the object to be assembled and the feature quantity set as the attention feature quantity among the feature quantities of the assembly object.

[0222] Here, the feature quantities are, for example, feature points of the image, or contour lines of the detection targets (the assembly object, the object to be assembled, and the like) shown in the image. The feature quantity detection processing is processing for detecting feature quantities in the image, for example, feature point detection processing or contour detection processing.

[0223] A case in which feature points are detected as the feature quantities will be described below. A feature point is a point that can be observed prominently in an image. For example, in the captured image PIM11 shown in FIG. 5, feature points P1 to P10 are detected as feature points of the workpiece WK2, which is the object to be assembled, and feature points Q1 to Q5 are detected as feature points of the workpiece WK1, which is the assembly object. In the example of FIG. 5, for ease of illustration and explanation, only the feature points P1 to P10 and Q1 to Q5 are shown as detected, but many more feature points are detected in an actual captured image. Even when more feature points are detected, the content of the processing described below does not change.

[0224] In the present embodiment, a corner detection method or the like is used as the feature point detection method (feature point detection processing), but other general corner detection methods (eigenvalue-based detection, FAST feature detection) may also be used, as may local feature descriptors typified by SIFT (Scale Invariant Feature Transform), SURF (Speeded Up Robust Features), and the like.
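The feature point detectors named above are all available in common vision libraries; the snippet below is a minimal sketch using OpenCV (the image path and parameter values are arbitrary placeholders) that applies the Shi-Tomasi corner detector (an eigenvalue-based method), FAST, and SIFT to the same captured image.

```python
import cv2

# Hypothetical input: a grayscale captured image such as PIM11.
img = cv2.imread("captured_image.png", cv2.IMREAD_GRAYSCALE)

# Eigenvalue-based corner detection (Shi-Tomasi "good features to track").
corners = cv2.goodFeaturesToTrack(img, maxCorners=50, qualityLevel=0.01, minDistance=10)

# FAST feature detection.
fast = cv2.FastFeatureDetector_create(threshold=25)
fast_keypoints = fast.detect(img, None)

# SIFT keypoints and local feature descriptors.
sift = cv2.SIFT_create()
sift_keypoints, sift_descriptors = sift.detectAndCompute(img, None)
```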

[0225] In the present embodiment, visual servoing is performed on the basis of the feature quantity set as the target feature quantity among the feature quantities of the object to be assembled and the feature quantity set as the attention feature quantity among the feature quantities of the assembly object.

[0226] Specifically, in the example of FIG. 5, among the feature points P1 to P10 of the workpiece WK2, the target feature points P9 and P10 are set as the target feature quantities. On the other hand, among the feature points Q1 to Q5 of the workpiece WK1, the attention feature points Q4 and Q5 are set as the attention feature quantities.

[0227] The control unit 120 then moves the assembly object such that the attention feature points of the assembly object coincide with or approach the target feature points of the object to be assembled.

[0228] That is, the assembly object WK1 is moved as indicated by the arrow YJ such that the attention feature point Q4 approaches the target feature point P9 and the attention feature point Q5 approaches the target feature point P10.
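The movement just described can be thought of as driving the image-plane error between each attention feature point and its paired target feature point toward zero. The sketch below is a simplified, assumed formulation: it stacks the pixel errors for the pairs (Q4, P9) and (Q5, P10) and turns them into a proportional command; a real controller would additionally map this error through an image Jacobian to joint velocities, and the pixel coordinates and gain shown here are invented for illustration.

```python
import numpy as np

def image_error(attention_pts: np.ndarray, target_pts: np.ndarray) -> np.ndarray:
    """Stacked pixel error, e.g. [Q4 - P9, Q5 - P10], as a flat vector."""
    return (attention_pts - target_pts).reshape(-1)

def servo_step(attention_pts, target_pts, gain=0.1):
    e = image_error(np.asarray(attention_pts, float), np.asarray(target_pts, float))
    # Proportional command in image space; convergence when |e| is small.
    command = -gain * e
    converged = np.linalg.norm(e) < 2.0   # pixel tolerance (arbitrary)
    return command, converged

# Pairs taken from FIG. 5: (Q4 -> P9) and (Q5 -> P10), pixel coordinates assumed.
cmd, done = servo_step([[320, 240], [350, 240]], [[318, 300], [352, 300]])
```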

[0229] Here, the target feature quantity refers to a feature quantity of the object to be assembled that serves as the goal when the assembly object is moved by visual servoing. In other words, the target feature quantity is a feature quantity of the assembled part of the object to be assembled. The target feature point refers to the feature point set as the target feature quantity when feature point detection processing is performed. As described above, in the example of FIG. 5, the feature points P9 and P10 corresponding to the triangular hole of the workpiece WK2 are set as the target feature points.

[0230] On the other hand, the attention feature quantity refers to a feature quantity, among the feature quantities of the assembly object, that represents a point in real space (in the example of FIG. 5, the bottom surface of the workpiece WK1) that is to be moved toward the point in real space corresponding to the target feature quantity (in the example of FIG. 5, the triangular hole of the workpiece WK2). In other words, the attention feature quantity is a feature quantity of the assembling part of the assembly object. The attention feature point refers to the feature point set as the attention feature quantity when feature point detection processing is performed. As described above, in the example of FIG. 5, the feature points Q4 and Q5 corresponding to the bottom surface of the workpiece WK1 are set as the attention feature points.

[0231] The target feature quantity (target feature points) and the attention feature quantity (attention feature points) may be set in advance by an instructor (user), or may be set according to a given algorithm. For example, the target feature points may be set on the basis of the spread of the detected feature points and their relative positional relationships. Specifically, in the example of FIG. 5, the feature points P9 and P10 located near the center of the distribution of the feature points P1 to P10 representing the workpiece WK2 in the captured image PIM11 may be set as the target feature points. Alternatively, points corresponding to the target feature points may be set in advance in CAD (Computer Aided Design) data representing the object to be assembled, CAD matching between the CAD data and the captured image may be performed, and the feature points set as the target feature points may then be identified (detected) from among the feature points of the object to be assembled on the basis of the matching result. The same applies to the attention feature quantity (attention feature points).

[0232] In the present embodiment, the control unit 120 moves the assembly object such that its attention feature points coincide with or approach the target feature points of the object to be assembled. However, since the object to be assembled and the assembly object are physical objects, the target feature points and the attention feature points are not actually detected at the same point. That is, moving the assembly object so that the points "coincide" ultimately means moving the point at which an attention feature point is detected toward the point at which the corresponding target feature point is detected.

[0233] This makes it possible to move the assembly object such that the relative position-and-posture relationship between the set assembling part of the assembly object and the set assembled part of the object to be assembled becomes the target relative position-and-posture relationship.

[0234] The assembling part of the assembly object can thus be assembled to the assembled part of the object to be assembled. For example, as shown in FIG. 6, the bottom surface BA, which is the assembling part of the workpiece WK1, can be fitted into the hole that is the assembled part of the workpiece WK2.

[0235] Next, the processing flow of the present embodiment will be described with reference to the flowchart of FIG. 7.

[0236] First, the captured-image acquisition unit 110 acquires, for example, the captured image PIM11 shown in FIG. 5 (S101). Both the assembly object WK1 and the object to be assembled WK2 appear in the captured image PIM11.

[0237] Next, the control unit 120 performs feature quantity detection processing on the acquired captured image PIM11 to detect the target feature quantity FB of the object to be assembled WK2 and the attention feature quantity FA of the assembly object WK1 (S102, S103).

[0238] Then, as described above, the control unit 120 moves the assembly object WK1 on the basis of the detected attention feature quantity FA and target feature quantity FB (S104), and determines whether the relative position-and-posture relationship between the assembly object WK1 and the object to be assembled WK2 has become the target relative position-and-posture relationship (S105).

[0239] Finally, when it is determined that the relative position-and-posture relationship between the assembly object WK1 and the object to be assembled WK2 has become the target relative position-and-posture relationship as shown in FIG. 6, the processing ends; when it is determined that it has not, the processing returns to step S101 and is repeated. This is the processing flow of the present embodiment.
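Restated as code, the loop of FIG. 7 (S101 to S105) looks roughly like the following sketch. The callables passed in (image acquisition, the two feature detectors, the motion command, and the convergence test) are hypothetical stand-ins for the processing performed by the captured-image acquisition unit 110 and the control unit 120, not functions defined by the embodiment.

```python
def visual_servo_assembly(acquire_image, detect_fb, detect_fa, move, reached, max_iter=1000):
    """FIG. 7 loop: S101 acquire, S102/S103 detect FB and FA, S104 move, S105 check."""
    for _ in range(max_iter):
        pim = acquire_image()          # S101: captured image such as PIM11
        fb = detect_fb(pim)            # S102: target feature quantity FB of WK2
        fa = detect_fa(pim)            # S103: attention feature quantity FA of WK1
        move(fa, fb)                   # S104: move the assembly object WK1
        if reached(fa, fb):            # S105: target relative position/posture reached?
            return True
    return False
```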

[0240] 另外,拍摄图像获取部110也可W获取多个拍摄图像。 [0240] Further, the captured image acquiring section 110 may acquire a plurality of captured images W. 在运种情况下,拍摄图像获取部110可W获取多个映有组装对象物与被组装对象物的双方的拍摄图像,也可W获取仅映有组装对象物的拍摄图像、W及仅映有被组装对象物的拍摄图像。 In the transport case, the captured image acquiring section 110 may acquire a plurality of W enantioselective assembled object and taking an image of the target object of both the assembly, the captured image may be acquired only W enantioselective assembly target object, and only the enantiomer W the captured image has assembled object.

[0241] 运里,在图8的流程图中示出了后者的获取分别映有组装对象物与被组装对象物的多个拍摄图像的情况的处理流程。 [0241] Yun, in the flowchart shown in FIG. 8 of the latter are acquired enantioselective process flow of the case assembly object and the plurality of captured images are assembled object.

[0242] 首先,拍摄图像获取部110获取至少映有被组装对象物WK2的拍摄图像PIM11 (S201)。 [0242] First, the captured image acquisition unit 110 acquires at least enantioselective WK2 assembled object captured image PIM11 (S201). 此外,该拍摄图像PIM11也可W映有组装对象物WK1。 In addition, the captured image may PIM11 assembly target object W enantioselective WK1. 然后,控制部120从拍摄图像PIM11检测被组装对象物WK2的目标特征量FB(S202)。 Then, the control unit 120 detects from the captured image PIM11 assembly FB target object feature amount of WK2 (S202).

[0243] 接下来,拍摄图像获取部110获取至少映有组装对象物WK1的拍摄图像PIM12 (S203)。 [0243] Next, the captured image acquisition unit 110 acquires the captured image PIM12 least enantioselective WK1 is assembled object (S203). 此外,与步骤S201相同,该拍摄图像PIM12也可W映有被组装对象物WK2。 Further, the same step S201, the captured image may PIM12 W enantioselective assembled object WK2. 然后,控制部120从拍摄图像PIM12检测组装对象物Ml的关注特征量FA(S204)。 Then, the control unit 120 of interest from the feature amount detector assembly PIM12 captured image of the object Ml FA (S204).

[0244] 然后,W下与图7说明的处理流程相同,控制部120根据检测出的关注特征量FA与目标特征量FB,使组装对象物WK1移动(S205),并且对组装对象物WK1与被组装对象物WK2的相对位置姿势关系是否成为目标相对位置姿势关系进行判定(S206)。 [0244] Then, the processing flow in the W Description FIG. 7 is the same, 120 according to the feature of interest detected amount of FA and the target feature amount control unit the FB, the assembly object WK1 movement (S205), and the assembling object WK1 and the relative position and posture relationship between the object assembly WK2 whether a target relative position and posture relationship determination (S206).

[0245] 最后,在判定为组装对象物WK1与被组装对象物WK2的相对位置姿势关系如图6所示地成为目标相对位置姿势关系的情况下,结束处理,在判定为组装对象物WK1与被组装对象物WK2的相对位置姿势关系未成为目标相对位置姿势关系的情况下,回到步骤S203,并且反复进行处理。 [0245] Finally, in the case where it is determined that the object to be assembled with the assembly object relationship WK1 WK2 relative position and posture as shown in FIG. 6 becomes a target posture relative positional relationship, the process ends, in the assembly of the object is determined and WK1 the relative position and posture relationship between the assembly does not become an object WK2 case where the relative position and posture relationship between the target and return to step S203, the processing is repeated and. W上是获取分别映有组装对象物与被组装对象物的多个拍摄图像的情况的处理流程。 W is a process flow of acquiring each case enantioselective assembling object and the plurality of captured images are assembled object.

[0246] 另外,在上述例子中,通过视觉伺服将组装对象物实际组装于被组装对象物,但是本发明并不限定于此,也可W通过视觉伺服形成将组装对象物组装于被组装对象物之前的状态。 [0246] Further, in the above example, by assembling the visual servo object actually assembled is assembled to the object, the present invention is not limited thereto but may also be formed by W visual servo assembly target object in the assembled objects state before the object.

[0247] 目P,控制部120也可W根据被组装对象物的特征量(特征点),确定出与被组装对象物处于给定的位置关系的图像区域,并W使组装对象物的关注特征点与确定出的图像区域一致或者接近的方式,使组装对象物移动。 [0247] Head P, the control unit 120 may also W according to the feature quantity (feature point), it is determined that the attention and is in an assembled object given image region the positional relationship, and W assembling object assembled object consistent with the determined characteristic point image region or to approach, moving the object to the assembly. 换言之,控制部120也可W根据被组装对象物的特征量,确定与被组装对象物处于给定的位置关系的现实空间上的点,并使组装对象物向确定出的点移动。 In other words, the control unit 120 may be assembled to the feature quantity W of the object, determining the point on the target object in real space is assembled at a given positional relationship of the object and the assembly is moved to the determined points.

[0248] 例如,在图9所示的拍摄图像PIM中,作为表示被组装对象物WK2的被组装部分亦即Ξ角形的孔化的特征点,检测到特征点P8~P10。 [0248] For example, in the captured image PIM shown in FIG. 9, i.e., feature points are represented as the aperture of the angled Ξ partially assembled object assembled WK2 detected characteristic point P8 ~ P10. 在运种情况下,在拍摄图像PIM中,确定出与特征点P8~P10处于给定的位置关系的图像区域R1~R3。 In the transport case, the PIM in the captured image, it is determined that the feature point P8 ~ P10 at a given positional relationship between the image region R1 ~ R3. 然后,W使组装对象物WK1的关注特征点Q4与图像区域R2-致(接近)、组装对象物的关注特征点Q5与图像区域R3-致(接近)的方式,使组装对象物WK1移动。 Then, W the assembly of the object of interest feature point Q4 WK1 image region R2- actuator (close), the assembly of the object of interest feature point image region Q5 R3- actuator (close) manner, the assembly WK1 moving object.

[0249] 由此,能够形成例如组装作业之前的状态等。 [0249] Thus, a state can be formed before the assembling work and the like for example.

[0250] 另外,未必需要进行如上述例子所示地检测组装对象物的特征量的处理。 [0250] Further, the feature quantity is not necessarily required for detecting the target object processing the assembly as shown in the above examples. 例如,也可W检测被组装对象物的特征量,并根据检测出的被组装对象物的特征量,推断与机器人相对的被组装对象物的位置姿势,从而W使把持有组装对象物的手部与推断出的被组装对象物的位置接近的方式,控制机器人等。 For example, W can be detected feature amount of the object to be assembled, and the assembly in accordance with the object feature amount is detected, the robot opposite inference is assembled position of the object posture, so that the W to hold the assembly of the object hand and inferred to be close to the position of the assembly of the object, the control robot.

[0251 ] 2.2.由两种视觉伺服进行的组装作业 [0251] 2.2. Assembled job from two visual servoing

[0252] Next, processing in the case where two kinds of visual servoing are performed in succession will be described: visual servoing using a reference image (first visual servoing), and visual servoing that moves the assembly object using the feature quantity of the object to be assembled (second visual servoing).

[0253] For example, in Fig. 10, the assembly object WK1 is moved from position GC1 toward position GC2 (the movement indicated by arrow YJ1) by the first visual servoing using the reference image, and is moved from position GC2 toward position GC3 (the movement indicated by arrow YJ2) by the second visual servoing using the feature quantity of the object to be assembled WK2. Positions GC1 to GC3 are center-of-gravity positions of the assembly object WK1.

[0254] When such processing is performed, as shown in Fig. 3, the robot control system 100 of the present embodiment further includes a reference image storage unit 130. The reference image storage unit 130 stores a reference image showing the assembly object in the target position and posture. The reference image is, for example, the image RIM shown in Fig. 12(A) described later. The function of the reference image storage unit 130 can be realized by a memory such as a RAM (Random Access Memory), an HDD (Hard Disk Drive), or the like.

[0255] Then, as the first visual servoing indicated by arrow YJ1 in Fig. 10, the control unit 120 moves the assembly object toward the target position and posture on the basis of a first captured image in which at least the assembly object appears and the reference image.

[0256] After the first visual servoing, the control unit 120 performs the second visual servoing indicated by arrow YJ2 in Fig. 10. That is, after moving the assembly object, the control unit 120 performs feature quantity detection processing for the object to be assembled on the basis of a second captured image in which at least the object to be assembled appears, and moves the assembly object on the basis of the feature quantity of the object to be assembled.

[0257] Here, a more specific processing flow will be described with reference to the flowchart of Fig. 11 and Figs. 12(A) to 12(D).

[0258] First, as preparation for the first visual servoing, the hand HD of the robot grips the assembly object WK1, the assembly object WK1 is moved to the target position and posture (S301), and the assembly object WK1 in the target position and posture is imaged by the imaging unit 200 (camera CM in Fig. 10) to acquire the reference image (target image) RIM shown in Fig. 12(A) (S302). Then, a feature quantity F0 of the assembly object WK1 is detected from the acquired reference image RIM (S303).

[0259] Here, the target position and posture refers to the position and posture of the assembly object WK1 that is aimed at in the first visual servoing. For example, in Fig. 10, position GC2 is the target position and posture, and the reference image RIM of Fig. 12(A) shows the assembly object WK1 located at this target position and posture GC2. The target position and posture is set by the instructor (user) when the reference image is generated.

[0260] The reference image, such as the reference image RIM of Fig. 12(A), is an image in which the assembly object WK1, the object to be moved in the first visual servoing, appears in the above target position and posture. In the reference image RIM of Fig. 12(A), the object to be assembled WK2 also appears, but the object to be assembled WK2 does not necessarily have to appear. The reference image may also be an image stored in an external storage unit, an image acquired via a network, an image generated from CAD model data, or the like.

[0261] Next, the first visual servoing is performed. First, the captured image acquisition unit 110 acquires the first captured image PIM101 shown in Fig. 12(B) (S304).

[0262] Here, the first captured image in this example, such as the captured image PIM101 of Fig. 12(B), is a captured image in which, of the assembly object WK1 and the object to be assembled WK2 of the assembly work, at least the assembly object WK1 appears.

[0263] Then, the control unit 120 detects a feature quantity F1 of the assembly object WK1 from the first captured image PIM101 (S305), and moves the assembly object WK1 as indicated by arrow YJ1 in Fig. 10 on the basis of the feature quantities F0 and F1 (S306).

[0264] Then, the control unit 120 determines whether the assembly object WK1 is in the target position and posture GC2 (S307). When it is determined that the assembly object WK1 is in the target position and posture GC2, the process moves to the second visual servoing. On the other hand, when it is determined that the assembly object WK1 is not in the target position and posture GC2, the process returns to step S304 and the first visual servoing is repeated.

[0265] In this way, in the first visual servoing, the robot is controlled while the feature quantities of the assembly object WK1 in the reference image RIM and in the first captured image PIM101 are compared with each other.

[0266] Next, the second visual servoing is performed. First, the captured image acquisition unit 110 acquires the second captured image PIM21 shown in Fig. 12(C) (S308). Here, the second captured image refers to a captured image used for the second visual servoing. In the second captured image PIM21 of this example, both the assembly object WK1 and the object to be assembled WK2 appear.

[0267] Then, the control unit 120 detects a target feature quantity FB of the object to be assembled WK2 from the second captured image PIM21 (S309). For example, in this example, as shown in Fig. 12(C), a target feature point GP1 and a target feature point GP2 are detected as the target feature quantity FB.

[0268] Similarly, the control unit 120 detects a feature quantity of interest FA of the assembly object WK1 from the second captured image PIM21 (S310). For example, in this example, as shown in Fig. 12(C), a feature point of interest IP1 and a feature point of interest IP2 are detected as the feature quantity of interest FA.

[0269] Next, the control unit 120 moves the assembly object WK1 on the basis of the feature quantity of interest FA and the target feature quantity FB (S311). That is, in the same manner as the example described above with reference to Fig. 5, the assembly object WK1 is moved so that the feature point of interest IP1 approaches the target feature point GP1 and the feature point of interest IP2 approaches the target feature point GP2.

[0270] Then, the control unit 120 determines whether the assembly object WK1 and the object to be assembled WK2 are in the target relative position and posture relationship (S312). For example, in the captured image PIME shown in Fig. 12(D), the feature point of interest IP1 is adjacent to the target feature point GP1 and the feature point of interest IP2 is adjacent to the target feature point GP2, so it is determined that the assembly object WK1 and the object to be assembled WK2 are in the target relative position and posture relationship, and the processing ends.

[0271] On the other hand, when it is determined that the assembly object WK1 and the object to be assembled WK2 are not in the target relative position and posture relationship, the process returns to step S308 and the second visual servoing is repeated.

[0272] In this way, each time the same assembly work is repeated, the same reference image can be used to move the assembly object to the vicinity of the object to be assembled, after which the assembly work can be performed in accordance with the actual, detailed position and posture of the object to be assembled. That is, even when the position and posture of the object to be assembled at the time the reference image was generated differs from (is shifted with respect to) its position and posture at the time of the actual assembly work, the second visual servoing compensates for this positional shift, so the same reference image can be used every time in the first visual servoing without preparing a different reference image. As a result, the cost of preparing reference images can be suppressed.
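For illustration only, the following Python sketch outlines the two-stage flow of Fig. 11. The helper interface (capture, detect, move), the representation of a feature quantity as an array of two-dimensional feature-point coordinates, and the threshold eps are assumptions introduced for this sketch and are not part of the embodiment; the actual feature detection and robot commands depend on the imaging unit 200 and the control unit 120.

    import numpy as np

    # Assumed stand-ins: a "feature quantity" is treated as an array of 2-D
    # feature-point coordinates in the image, and the robot is driven by the
    # image-space difference between two such arrays.

    def feature_error(f_target, f_current):
        """Mean distance (in pixels) between corresponding feature points."""
        return float(np.mean(np.linalg.norm(f_target - f_current, axis=1)))

    def two_stage_visual_servo(capture, detect, move, reference_image, eps=1.0):
        # Preparation: feature quantity F0 of the assembly object WK1 in the reference image RIM (S303).
        f0 = detect(reference_image, "WK1")

        # First visual servoing (arrow YJ1): compare WK1's features with F0.
        while True:
            pim = capture()                  # first captured image (S304)
            f1 = detect(pim, "WK1")          # feature quantity F1 (S305)
            if feature_error(f0, f1) < eps:  # WK1 at the target position and posture GC2?
                break
            move("WK1", f0 - f1)             # move WK1 so that F1 approaches F0 (S306)

        # Second visual servoing (arrow YJ2): compare WK1's features with WK2's features.
        while True:
            pim = capture()                  # second captured image (S308)
            fb = detect(pim, "WK2")          # target feature quantity FB (GP1, GP2) (S309)
            fa = detect(pim, "WK1")          # feature quantity of interest FA (IP1, IP2) (S310)
            if feature_error(fb, fa) < eps:  # target relative position and posture relationship?
                break
            move("WK1", fb - fa)             # IP1 -> GP1, IP2 -> GP2 (S311)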

[0273] In step S310 described above, the feature quantity of interest FA of the assembly object WK1 is detected from the second captured image PIM21, but it does not necessarily have to be detected from the second captured image PIM21. For example, when the assembly object WK1 does not appear in the second captured image PIM21, the feature quantity of the assembly object WK1 may be detected from another second captured image PIM22 in which the assembly object appears.

[0274] 2.3. Assembly Work with Three Workpieces

[0275] Next, as shown in Figs. 13A and 13B, processing in the case of performing assembly work with three workpieces WK1 to WK3 will be described.

[0276] In this assembly work, as shown in Fig. 13A, the assembly object WK1 (workpiece WK1, for example a driver) gripped by the first hand HD1 of the robot is assembled to the first object to be assembled WK2 (workpiece WK2, for example a screw) gripped by the second hand HD2 of the robot, and the workpiece WK2, now in an assembled state with the workpiece WK1, is assembled to the second object to be assembled WK3 (workpiece WK3, for example a screw hole) on the work table. After the assembly work, the assembled state shown in Fig. 13B is obtained.

[0277] Specifically, when such processing is performed, as shown in Fig. 14(A), the control unit 120 performs feature quantity detection processing for the first object to be assembled WK2 on the basis of a first captured image PIM31 in which at least the first object to be assembled WK2 of the assembly work appears. The first captured image in this example refers to the captured image used when performing the assembly work of the assembly object WK1 and the first object to be assembled WK2.

[0278] Then, the control unit 120 moves the assembly object WK1 as indicated by arrow YJ1 in Fig. 13A on the basis of the feature quantity of the first object to be assembled WK2.

[0279] Next, as shown in Fig. 14(B), after moving the assembly object WK1, the control unit 120 performs feature quantity detection processing for the second object to be assembled WK3 on the basis of a second captured image PIM41 in which at least the second object to be assembled WK3 appears. The second captured image in this example refers to the captured image used when performing the assembly work of the first object to be assembled WK2 and the second object to be assembled WK3.

[0280] Then, the control unit 120 moves the assembly object WK1 and the first object to be assembled WK2 as indicated by arrow YJ2 in Fig. 13A on the basis of the feature quantity of the second object to be assembled WK3.

[0281] Thus, each time the assembly work is performed, even when the positions of the first object to be assembled WK2 and the second object to be assembled WK3 are shifted, the assembly work of the assembly object, the first object to be assembled, and the second object to be assembled can still be performed.

[0282] Next, the processing flow of the assembly work with three workpieces shown in Figs. 13A and 13B will be described in detail with reference to the flowchart of Fig. 15.

[0283] First, the captured image acquisition unit 110 acquires one or more first captured images in which at least the assembly object WK1 and the first object to be assembled WK2 of the assembly work appear. Then, the control unit 120 performs feature quantity detection processing for the assembly object WK1 and the first object to be assembled WK2 on the basis of the first captured images.

[0284] In this example, first, the captured image acquisition unit 110 acquires the first captured image PIM31 in which the first object to be assembled WK2 appears (S401). Then, the control unit 120 performs feature quantity detection processing for the first object to be assembled WK2 on the basis of the first captured image PIM31 and detects a first target feature quantity FB1 (S402). Here, the target feature points GP1 and GP2 shown in Fig. 14(A) are detected as the first target feature quantity FB1.

[0285] Next, the captured image acquisition unit 110 acquires the first captured image PIM32 in which the assembly object WK1 appears (S403). Then, the control unit 120 performs feature quantity detection processing for the assembly object WK1 on the basis of the first captured image PIM32 and detects a first feature quantity of interest FA (S404). Here, the feature points of interest IP1 and IP2 are detected as the first feature quantity of interest FA.

[0286] In steps S401 to S404, an example was described in which a plurality of first captured images (PIM31 and PIM32), showing the assembly object WK1 and the first object to be assembled WK2 respectively, are acquired. However, as shown in Fig. 14(A), when the assembly object WK1 and the first object to be assembled WK2 appear in the same first captured image PIM31, the feature quantities of both the assembly object WK1 and the first object to be assembled WK2 may be detected from the first captured image PIM31.

[0287] Next, the control unit 120 moves the assembly object WK1 on the basis of the feature quantity of the assembly object WK1 (the first feature quantity of interest FA) and the feature quantity of the first object to be assembled WK2 (the first target feature quantity FB1) so that the relative position and posture relationship between the assembly object WK1 and the first object to be assembled WK2 becomes the first target relative position and posture relationship (S405). Specifically, in the captured image, the assembly object WK1 is moved so that the feature point of interest IP1 approaches the target feature point GP1 and the feature point of interest IP2 approaches the target feature point GP2. This movement corresponds to the movement of arrow YJ1 in Fig. 13A.

[0288] Then, the control unit 120 determines whether the assembly object WK1 and the first object to be assembled WK2 are in the first target relative position and posture relationship (S406). When it is determined that they are not in the first target relative position and posture relationship, the process returns to step S403 and the processing is performed again.

[0289] On the other hand, when it is determined that the assembly object WK1 and the first object to be assembled WK2 are in the first target relative position and posture relationship, a second feature quantity of interest FB2 of the first object to be assembled WK2 is detected from the first captured image PIM32 (S407). Specifically, as shown in Fig. 14(B) described later, the control unit 120 detects the feature points of interest IP3 and IP4 as the second feature quantity of interest FB2.

[0290] Next, the captured image acquisition unit 110 acquires the second captured image PIM41, shown in Fig. 14(B), in which at least the second object to be assembled WK3 appears (S408).

[0291] Then, the control unit 120 performs feature quantity detection processing for the second object to be assembled WK3 on the basis of the second captured image PIM41 and detects a second target feature quantity FC (S409). Specifically, as shown in Fig. 14(B), the control unit 120 detects the target feature points GP3 and GP4 as the second target feature quantity FC.

[0292] Next, the control unit 120 moves the assembly object WK1 and the first object to be assembled WK2 on the basis of the feature quantity of the first object to be assembled WK2 (the second feature quantity of interest FB2) and the feature quantity of the second object to be assembled WK3 (the second target feature quantity FC) so that the relative position and posture relationship between the first object to be assembled WK2 and the second object to be assembled WK3 becomes the second target relative position and posture relationship (S410).

[0293] Specifically, in the captured image, the assembly object WK1 and the first object to be assembled WK2 are moved so that the feature point of interest IP3 approaches the target feature point GP3 and the feature point of interest IP4 approaches the target feature point GP4. This movement corresponds to the movement of arrow YJ2 in Fig. 13A.

[0294] Then, the control unit 120 determines whether the first object to be assembled WK2 and the second object to be assembled WK3 are in the second target relative position and posture relationship (S411). When it is determined that they are not in the second target relative position and posture relationship, the process returns to step S408 and the processing is performed again.

[0295] On the other hand, as in the captured image PIME shown in Fig. 14(C), when it is determined that the first object to be assembled WK2 and the second object to be assembled WK3 are in the assembled state, that is, in the second target relative position and posture relationship, the processing ends.

[0296] In this way, visual servoing can be performed so that the feature points of interest (IP1 and IP2) of the assembly object WK1 approach the target feature points (GP1 and GP2) of the first object to be assembled WK2, and the feature points of interest (IP3 and IP4) of the first object to be assembled WK2 approach the target feature points (GP3 and GP4) of the second object to be assembled WK3.
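A condensed sketch of the sequential flow of Fig. 15 is given below, reusing the hypothetical capture/detect/move interface and the feature_error helper of the earlier sketch; the labels passed to detect (for example "WK2_GP" for the target feature points GP1 and GP2) are likewise assumptions, and the step numbers in the comments refer to the flowchart.

    def assemble_three_workpieces_sequentially(capture, detect, move, eps=1.0):
        # Phase 1 (arrow YJ1 in Fig. 13A): bring WK1 into the first target relative
        # position and posture relationship with the first object to be assembled WK2.
        while True:
            pim = capture()                   # first captured image(s)                           S401, S403
            fb1 = detect(pim, "WK2_GP")       # first target feature quantity FB1 (GP1, GP2)      S402
            fa = detect(pim, "WK1_IP")        # first feature quantity of interest FA (IP1, IP2)  S404
            if feature_error(fb1, fa) < eps:  # first target relative relationship reached?       S406
                break
            move("WK1", fb1 - fa)             # IP1 -> GP1, IP2 -> GP2                            S405

        # Phase 2 (arrow YJ2 in Fig. 13A): bring WK2 (now carrying WK1) into the
        # second target relative position and posture relationship with WK3.
        while True:
            pim = capture()                   # second captured image                               S408
            fb2 = detect(pim, "WK2_IP")       # second feature quantity of interest FB2 (IP3, IP4)  S407
            fc = detect(pim, "WK3_GP")        # second target feature quantity FC (GP3, GP4)        S409
            if feature_error(fc, fb2) < eps:  # second target relative relationship reached?        S411
                break
            move("WK1+WK2", fc - fb2)         # IP3 -> GP3, IP4 -> GP4                              S410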

[0297] Alternatively, instead of assembling the assembly object WK1 and the first object to be assembled WK2 in order as shown in the flowchart of Fig. 15, the three workpieces may be assembled simultaneously as shown in Figs. 16(A) to 16(C).

[0298] The flowchart of Fig. 17 shows the processing flow in this case. First, the captured image acquisition unit 110 acquires one or more captured images in which the assembly object WK1, the first object to be assembled WK2, and the second object to be assembled WK3 of the assembly work appear (S501). In this example, the captured image PIM51 shown in Fig. 16(A) is acquired.

[0299] Next, the control unit 120 performs feature quantity detection processing for the assembly object WK1, the first object to be assembled WK2, and the second object to be assembled WK3 on the basis of the one or more captured images (S502 to S504).

[0300] In this example, as shown in Fig. 16(A), the target feature points GP3 and GP4 are detected as the feature quantity of the second object to be assembled WK3 (S502). Then, the target feature points GP1 and GP2 and the feature points of interest IP3 and IP4 are detected as the feature quantity of the first object to be assembled WK2 (S503). Furthermore, the feature points of interest IP1 and IP2 are detected as the feature quantity of the assembly object WK1 (S504). When the three workpieces appear in different captured images, the feature quantity detection processing may be performed on each of the different captured images.

[0301] Next, the control unit 120 moves the assembly object WK1 on the basis of the feature quantity of the assembly object WK1 and the feature quantity of the first object to be assembled WK2 so that the relative position and posture relationship between the assembly object WK1 and the first object to be assembled WK2 becomes the first target relative position and posture relationship, while moving the first object to be assembled WK2 on the basis of the feature quantity of the first object to be assembled WK2 and the feature quantity of the second object to be assembled WK3 so that the relative position and posture relationship between the first object to be assembled WK2 and the second object to be assembled WK3 becomes the second target relative position and posture relationship (S505).

[0302] That is, the assembly object WK1 and the first object to be assembled WK2 are moved simultaneously so that the feature point of interest IP1 approaches the target feature point GP1, the feature point of interest IP2 approaches the target feature point GP2, the feature point of interest IP3 approaches the target feature point GP3, and the feature point of interest IP4 approaches the target feature point GP4.

[0303] Then, the captured image acquisition unit 110 acquires a new captured image (S506), and the control unit 120 determines, on the basis of the newly acquired captured image, whether the three workpieces, namely the assembly object WK1, the first object to be assembled WK2, and the second object to be assembled WK3, are in the target relative position and posture relationship (S507).

[0304] For example, when the captured image acquired in step S506 is the captured image PIM52 shown in Fig. 16(B) and it is determined that the three workpieces are not yet in the target relative position and posture relationship, the process returns to step S503 and the processing is repeated, with the processing from step S503 onward performed on the basis of the newly acquired captured image PIM52.

[0305] On the other hand, when the captured image acquired in step S506 is the captured image PIME shown in Fig. 16(C), it is determined that the three workpieces are in the target relative position and posture relationship, and the processing ends.

[0306] Thus, the assembly work of the three workpieces can be performed simultaneously. As a result, the working time of the assembly work of the three workpieces can be shortened.
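For comparison, a sketch of the simultaneous flow of Fig. 17 follows, under the same assumed interface as the previous sketches; the only difference from the sequential version is that both feature errors are evaluated in a single loop and both movements are commanded in the same cycle.

    def assemble_three_workpieces_simultaneously(capture, detect, move, eps=1.0):
        while True:
            pim = capture()                        # captured image PIM51, PIM52, ...   S501 / S506
            fc = detect(pim, "WK3_GP")             # GP3, GP4                           S502
            fb1 = detect(pim, "WK2_GP")            # GP1, GP2                           S503
            fb2 = detect(pim, "WK2_IP")            # IP3, IP4                           S503
            fa = detect(pim, "WK1_IP")             # IP1, IP2                           S504
            done_1 = feature_error(fb1, fa) < eps  # WK1 and WK2 in the first target relationship?
            done_2 = feature_error(fc, fb2) < eps  # WK2 and WK3 in the second target relationship?
            if done_1 and done_2:                  # all three workpieces in place      S507
                break
            move("WK1", fb1 - fa)                  # IP1 -> GP1, IP2 -> GP2 ...
            move("WK2", fc - fb2)                  # ... while IP3 -> GP3, IP4 -> GP4   S505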

[0307] Furthermore, when the assembly work of the three workpieces is performed, the assembly work may also be performed in the reverse order of the assembly order shown in the flowchart of Fig. 15. That is, as shown in Figs. 18(A) to 18(C), the first object to be assembled WK2 may be assembled to the second object to be assembled WK3, and then the assembly object WK1 may be assembled to the first object to be assembled WK2.

[0308] In this case, as shown in Fig. 18(A), the control unit 120 performs feature quantity detection processing for the second object to be assembled WK3 on the basis of a first captured image PIM61 in which at least the second object to be assembled WK3 of the assembly work appears, and moves the first object to be assembled WK2 on the basis of the feature quantity of the second object to be assembled WK3. Since the details of the feature quantity detection processing are the same as in the example described with reference to Fig. 16(A), their description is omitted.

[0309] Next, as shown in Fig. 18(B), the control unit 120 performs feature quantity detection processing for the first object to be assembled WK2 on the basis of a second captured image PIM71 in which at least the moved first object to be assembled WK2 appears, and moves the assembly object WK1 on the basis of the feature quantity of the first object to be assembled WK2 so as to form the assembled state of Fig. 18(C).

[0310] Thus, the assembly object WK1 and the first object to be assembled WK2 do not need to be moved simultaneously, and the robot can be controlled more easily. In addition, the assembly work of the three workpieces can be performed not only with a multi-arm robot but also with a single-arm robot.

[0311] The imaging unit (camera) 200 used in the present embodiment described above includes, for example, an imaging element such as a CCD (charge-coupled device) and an optical system. The imaging unit 200 is disposed, for example, on the ceiling or above the work table, at such an angle that the detection target in visual servoing (the assembly object, the object to be assembled, the end effector 310 of the robot 300, or the like) falls within the angle of view of the imaging unit 200. The imaging unit 200 then outputs the information of the captured image to the robot control system 100 and the like. In the present embodiment, the information of the captured image is output to the robot control system 100 as it is, but the configuration is not limited to this. For example, the imaging unit 200 may include a device (processor) used for image processing and the like.

[0312] 3. Robot

[0313] Next, Figs. 19A and 19B show configuration examples of a robot 300 to which the robot control system 100 of the present embodiment is applied. In both Figs. 19A and 19B, the robot 300 has an end effector 310.

[0314] The end effector 310 is a component attached to the end point of an arm in order to grip, lift, hoist, or suck a workpiece (work object), or to machine the workpiece. The end effector 310 may be, for example, a hand (gripping unit), a hook, a suction pad, or the like. A plurality of end effectors may also be provided for one arm. The arm is a component of the robot 300 and is a movable component including one or more joints.

[0315] For example, in the robot of Fig. 19A, the robot main body 300 (robot) and the robot control system 100 are configured separately. In this case, part or all of the functions of the robot control system 100 are realized by, for example, a PC (Personal Computer).

[0316] The robot of the present embodiment is not limited to the configuration of Fig. 19A; the robot main body 300 and the robot control system 100 may be configured as one unit as shown in Fig. 19B. That is, the robot 300 may include the robot control system 100. Specifically, as shown in Fig. 19B, the robot 300 may have a robot main body (having an arm and an end effector 310) and a base unit that supports the robot main body, and the robot control system 100 may be housed in the base unit. In the robot 300 of Fig. 19B, wheels and the like are provided in the base unit so that the entire robot can move. Fig. 19A shows a single-arm example, but the robot 300 may also be a multi-arm robot such as the dual-arm robot shown in Fig. 19B. The robot 300 may be moved by hand, or a motor that drives the wheels may be provided and controlled by the robot control system 100 to move the robot. The robot control system 100 does not necessarily have to be provided in the base unit under the robot 300 as shown in Fig. 19B.

[0317] As shown in Fig. 20, the functions of the robot control system 100 may also be realized by a server 500 that is communicably connected to the robot 300 via a network 400 including at least one of wired and wireless connections.

[0318] Alternatively, in the present embodiment, part of the processing of the robot control system of the present invention may be performed by the robot control system on the server 500 side. In this case, the processing is realized by distributed processing with a robot control system provided on the robot 300 side. The robot control system on the robot 300 side is realized by, for example, a terminal device 330 (control unit) provided in the robot 300.

[0319] In this case, the robot control system on the server 500 side performs, among the processes of the robot control system of the present invention, those assigned to the robot control system of the server 500. On the other hand, the robot control system provided in the robot 300 performs, among the processes of the robot control system of the present invention, those assigned to the robot control system of the robot 300. Each process of the robot control system of the present invention may be assigned to the server 500 side or to the robot 300 side.

[0320] Thus, for example, the server 500, which has higher processing capability than the terminal device 330, can handle processing with a large processing load. Furthermore, for example, the server 500 can collectively control the operations of the robots 300, making it easy to have a plurality of robots 300 operate in a coordinated manner.

[0321] In recent years, there has been an increasing trend toward manufacturing a large variety of components in small quantities. When the type of component to be manufactured is changed, the operations performed by the robot must be changed. With the configuration shown in Fig. 20, the server 500 can collectively change the operations performed by the robots 300 without redoing the teaching work for each of the plurality of robots 300.

[0322] Furthermore, with the configuration shown in Fig. 20, compared with the case where one robot control system 100 is provided for each robot 300, the trouble involved in updating the software of the robot control system 100 can be greatly reduced.

[0323] Part or most of the processing of the robot control system and the robot of the present embodiment may be realized by a program. In this case, a processor such as a CPU executes the program, thereby realizing the robot control system and the robot of the present embodiment. Specifically, a program stored in an information storage medium is read out, and a processor such as a CPU executes the read program. Here, the information storage medium (a computer-readable medium) stores programs, data, and the like, and its function can be realized by an optical disc (DVD, CD, etc.), an HDD (hard disk drive), a memory (card-type memory, ROM, etc.), or the like. A processor such as a CPU performs the various processes of the present embodiment on the basis of the program (data) stored in the information storage medium. That is, the information storage medium stores a program for causing a computer (a device including an operation unit, a processing unit, a storage unit, and an output unit) to function as each unit of the present embodiment (a program for causing the computer to execute the processing of each unit).

[0324] The present embodiment has been described above in detail, but those skilled in the art will readily understand that many modifications are possible without materially departing from the novel matter and effects of the present invention. Accordingly, all such modifications are included within the scope of the present invention. For example, a term described at least once in the specification or drawings together with a different term having a broader or synonymous meaning can be replaced with that different term at any place in the specification or drawings. The configurations and operations of the robot control system, the robot, and the program are also not limited to those described in the present embodiment, and various modified implementations are possible.

[0325] Second Embodiment

[0326] Fig. 21 is a system configuration diagram showing an example of the configuration of a robot system 1 according to an embodiment of the present invention. The robot system 1 of the present embodiment mainly includes a robot 10, a control unit 20, a first imaging unit 30, and a second imaging unit 40.

[0327] The robot 10 is an arm-type robot having an arm 11 that includes a plurality of joints 12 and a plurality of links 13. The robot 10 performs processing in accordance with control signals from the control unit 20.

[0328] The joints 12 are provided with actuators (not shown) for operating them. The actuators include, for example, servomotors and encoders. The encoder values output by the encoders are used for feedback control of the robot 10 by the control unit 20.

[0329] A hand-eye camera 15 is provided near the tip of the arm 11. The hand-eye camera 15 is a unit that images an object at the tip of the arm 11 and generates image data. As the hand-eye camera 15, for example, a visible-light camera, an infrared camera, or the like can be employed.

[0330] As the region of the tip portion of the arm 11, a region that is not connected to any other region of the robot 10 (excluding the hand 14 described later) is defined as the end point of the arm 11. In the present embodiment, the position of the point E shown in Fig. 21 is the position of the end point.

[0331] Regarding the configuration of the robot 10, the main configuration has been described in order to explain the features of the present embodiment, and the configuration is not limited to the above. Configurations of general gripping robots are not excluded. For example, Fig. 21 shows a six-axis arm, but the number of axes (number of joints) may be increased further or reduced. The number of links may also be increased or decreased. The shapes, sizes, arrangements, structures, and the like of the various members such as the arm, links, and joints may also be changed as appropriate.

[0332] The control unit 20 performs processing for controlling the robot 10 as a whole. The control unit 20 may be installed at a location away from the main body of the robot 10, or may be built into the robot 10. When the control unit 20 is installed at a location away from the main body of the robot 10, the control unit 20 is connected to the robot 10 in a wired or wireless manner.

[0333] The first imaging unit 30 and the second imaging unit 40 are units that image the vicinity of the work area of the arm 11 from different angles and generate image data. The first imaging unit 30 and the second imaging unit 40 include, for example, cameras and are installed on the work table, the ceiling, a wall, or the like. As the first imaging unit 30 and the second imaging unit 40, visible-light cameras, infrared cameras, or the like can be employed. The first imaging unit 30 and the second imaging unit 40 are connected to the control unit 20, and the images captured by them are input to the control unit 20. The first imaging unit 30 and the second imaging unit 40 may also be connected to the robot 10 instead of the control unit 20, or may be built into the robot 10. In that case, the images captured by the first imaging unit 30 and the second imaging unit 40 are input to the control unit 20 via the robot 10.

[0334] Next, an example of the functional configuration of the robot system 1 will be described. Fig. 22 shows a functional block diagram of the robot system 1.

[0335] The robot 10 includes an operation control unit 101 that controls the operation of the arm 11 on the basis of the encoder values of the actuators, sensor values of sensors, and the like.

[0336] The operation control unit 101 drives the actuators so that the arm 11 moves to the position output from the control unit 20, on the basis of the information output from the control unit 20, the encoder values of the actuators, the sensor values of sensors, and the like. The current position of the end point can be obtained from the encoder values and the like of the actuators provided at the joints 12 and the like.

[0337] The control unit 20 mainly includes a position control unit 200, a visual servo unit 210, and a drive control unit 220. The position control unit 200 mainly includes a path acquisition unit 201 and a first control unit 202. The visual servo unit 210 mainly includes an image acquisition unit 211, an image processing unit 212, and a second control unit 213.

[0338] The position control unit 200 executes position control that moves the arm 11 along a predetermined path set in advance.

[0339] The path acquisition unit 201 acquires information related to the path. The path is formed on the basis of taught positions; for example, it is formed by connecting one or more taught positions, set in advance by teaching, in a predetermined order set in advance. Information related to the path, for example information indicating the coordinates and the order within the path, is held in the memory 22 (described later; see Fig. 24 and the like). The information related to the path held in the memory 22 may be input via the input device 25 or the like. The information related to the path also includes the final position of the end point, that is, information related to the target position.

[0340] The first control unit 202 sets the next taught position, that is, sets the trajectory of the end point, on the basis of the information related to the path acquired by the path acquisition unit 201.

[0341] Furthermore, the first control unit 202 decides the next movement position of the arm 11, that is, the target angles of the actuators provided at the joints 12, on the basis of the trajectory of the end point. The first control unit 202 also generates command values that cause the arm 11 to move by the target angles and outputs them to the drive control unit 220. Since the processing performed by the first control unit 202 is of a general nature, a detailed description is omitted.

[0342] The visual servo unit 210 moves the arm 11 by executing so-called visual servoing, a control method that measures changes in the position relative to a target object as visual information on the basis of the images captured by the first imaging unit 30 and the second imaging unit 40, and uses this as feedback information to track the target object.

[0343] As visual servoing, methods such as a position-based method and a feature-based method can be used. The position-based method controls the robot on the basis of three-dimensional position information of the object, calculated by a method such as stereo vision in which two images having parallax are recognized together as a stereoscopic image. The feature-based method controls the robot so that the difference between the images captured by two imaging units from orthogonal directions and the goal images held in advance for the respective imaging units becomes zero (so that the error matrix of the pixel quantities of the images becomes zero). For example, in the present embodiment, the feature-based method is employed. The feature-based method can be performed using one imaging unit, but in order to improve accuracy, it is preferable to use two imaging units.
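As a rough illustration of the feature-based method, the following sketch computes one proportional command from the feature errors of the two imaging units; the extract function, the gain, and the treatment of the features as plain vectors are assumptions of this sketch and not part of the embodiment.

    import numpy as np

    def feature_based_step(img1, img2, goal_feat1, goal_feat2, extract, gain=0.1):
        # Errors between the features currently seen by the two imaging units and
        # the goal features held in advance for each imaging unit.
        e1 = goal_feat1 - extract(img1)   # first imaging unit 30
        e2 = goal_feat2 - extract(img2)   # second imaging unit 40
        error = np.concatenate([e1, e2])
        command = gain * error            # simple proportional command in image space
        # Converged when the error norm is (near) zero.
        return command, float(np.linalg.norm(error))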

[0344] The image acquisition unit 211 acquires the image captured by the first imaging unit 30 (hereinafter referred to as the first image) and the image captured by the second imaging unit 40 (hereinafter referred to as the second image). The first image and the second image acquired by the image acquisition unit 211 are output to the image processing unit 212.

[0345] The image processing unit 212 recognizes the end point at the tip of the arm from the first image and the second image acquired from the image acquisition unit 211, and extracts images that include the end point. Since various general techniques can be used for the image recognition processing performed by the image processing unit 212, a description thereof is omitted.

[0346] The second control unit 213 sets the trajectory of the end point, that is, the amount and direction of movement of the end point, on the basis of the image extracted by the image processing unit 212 (hereinafter referred to as the current image) and the image obtained when the end point is at the target position (hereinafter referred to as the target image). For the target image, information acquired in advance may be stored in the memory 22 or the like.

[0347] The second control unit 213 also decides the target angles of the actuators provided at the joints 12 on the basis of the amount and direction of movement of the end point. Furthermore, the second control unit 213 generates command values that cause the arm 11 to move by the target angles and outputs them to the drive control unit 220. Since the processing performed by the second control unit 213 is of a general nature, a detailed description is omitted.

[0348] In the robot 10 having joints, once the angle of each joint is decided, the position of the end point is uniquely determined by forward kinematics processing. That is, in an N-joint robot, one target position can be expressed by N joint angles, so if a combination of N joint angles is regarded as one target joint angle, the trajectory of the end point can be regarded as a set of joint angles. Accordingly, the command values output from the first control unit 202 and the second control unit 213 may be values related to position (target positions) or values related to the joint angles (target angles).
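As a generic illustration of this relationship only, a planar two-joint forward kinematics example is shown below; the link lengths are arbitrary, and the six-axis robot 10 of Fig. 21 naturally has a different kinematic model.

    import math

    def forward_kinematics_2dof(theta1, theta2, l1=0.3, l2=0.25):
        """Planar 2-joint example: given the joint angles (rad), the end-point
        position (x, y) is uniquely determined by forward kinematics."""
        x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
        y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
        return x, y

    # A trajectory of the end point can therefore be handled as a sequence of joint
    # angle sets, so a command value may be given either as a target position or as
    # a set of target angles.
    print(forward_kinematics_2dof(math.radians(30), math.radians(45)))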

[0349] The drive control unit 220 outputs instructions to the operation control unit 101 so as to move the position of the end point, that is, to move the arm 11, on the basis of the information acquired from the first control unit 202 and the second control unit 213. The details of the processing performed by the drive control unit 220 will be described later.

[0350] Fig. 23 is a data flow diagram of the robot system 1.

[0351] In the position control unit 200, a feedback loop is run for bringing each joint of the robot closer to its target angle by position control. The information of the path set in advance includes information related to the target position. When the first control unit 202 acquires the information related to the target position, it generates a trajectory and command values (here, target angles) on the basis of the information related to the target position and the current position acquired by the path acquisition unit 201.

[0352] In the visual servo unit 210, a visual feedback loop is run for approaching the target position using the information from the first imaging unit 30 and the second imaging unit 40. The second control unit 213 acquires the target image as information related to the target position. Since the current image and the target position on the current image are expressed in the coordinate system of the image, the second control unit 213 converts them into the coordinate system of the robot. The second control unit 213 then generates a trajectory and command values (here, target angles) on the basis of the converted current image and the target image.

[0353] The drive control unit 220 outputs to the robot 10 a command value formed from the command value output from the first control unit 202 and the command value output from the second control unit 213. Specifically, the drive control unit 220 multiplies the command value output from the first control unit 202 by a coefficient α, multiplies the command value output from the second control unit 213 by a coefficient 1−α, and outputs the combined value to the robot 10. Here, α is a real number greater than 0 and smaller than 1.

[0354] The manner of forming the command value from the command value output from the first control unit 202 and the command value output from the second control unit 213 is not limited to this.

[0355] Here, in the present embodiment, command values are output from the first control unit 202 at constant intervals (for example, every 1 millisecond (msec)), and command values are output from the second control unit 213 at intervals longer than the output interval of the first control unit 202 (for example, every 30 msec). Therefore, when no command value is output from the second control unit 213, the drive control unit 220 multiplies the command value output from the first control unit 202 by the coefficient α, multiplies the command value last output from the second control unit 213 by the coefficient 1−α, and outputs the combined value to the robot 10. The command value last output from the second control unit 213 is temporarily stored in a storage device such as the memory 22 (see Fig. 24), and the drive control unit 220 reads it out from the storage device and uses it.
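A minimal sketch of this combining rule is shown below, assuming numeric command values and an arbitrarily chosen α; the class name and the fallback behaviour before the first visual-servo command arrives are assumptions introduced here.

    class DriveControlSketch:
        """Combine the position-control command (every cycle) with the visual-servo
        command (every 30th cycle; otherwise the last held value) as
        alpha * u_position + (1 - alpha) * u_visual."""

        def __init__(self, alpha=0.7):
            assert 0.0 < alpha < 1.0      # alpha is a real number greater than 0 and smaller than 1
            self.alpha = alpha
            self.last_vs_command = None   # corresponds to the value held in the memory 22

        def combine(self, pos_command, vs_command=None):
            if vs_command is not None:
                self.last_vs_command = vs_command  # a new visual-servo command arrived this cycle
            if self.last_vs_command is None:
                return pos_command                 # assumed fallback before any visual-servo command
            return (self.alpha * pos_command
                    + (1.0 - self.alpha) * self.last_vs_command)

    drive = DriveControlSketch(alpha=0.7)
    print(drive.combine(1.00))        # cycle with no visual-servo command yet
    print(drive.combine(1.00, 0.40))  # cycle in which a visual-servo command arrives
    print(drive.combine(1.02))        # next cycle: the value 0.40 is held and reused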

[0356] The operation control unit 101 acquires a command value (target angle) from the control unit 20. The operation control unit 101 obtains the current angle of the end point on the basis of the encoder values and the like of the actuators provided at the joints 12 and the like, and calculates the difference (deviation angle) between the target angle and the current angle. The operation control unit 101 then calculates the movement speed of the arm 11 in accordance with the deviation angle, for example (the larger the deviation angle, the faster the movement speed), and moves the arm 11 by the calculated deviation angle at the calculated movement speed.
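For one joint, the deviation-to-speed behaviour described above could be sketched as follows; the gain and the speed limit are assumptions, since the embodiment only states that a larger deviation angle gives a faster movement speed.

    def joint_speed_from_deviation(target_angle, current_angle, k=2.0, max_speed=1.5):
        """Return (deviation_angle, speed) in rad and rad/s: the larger the
        deviation, the faster the movement, up to an assumed speed limit."""
        deviation = target_angle - current_angle
        speed = min(k * abs(deviation), max_speed)
        return deviation, speed

    # Example for one joint: target 0.50 rad, current 0.35 rad.
    dev, spd = joint_speed_from_deviation(0.50, 0.35)
    print(dev, spd)   # the arm is moved by 'dev' at speed 'spd'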

[0357] 图24是表示控制部20的简要结构的一个例子的框图。 [0357] FIG. 24 is a schematic block diagram showing an example of configuration of the control unit 20. 如图所示,由例如计算机等构成的控制部20具备作为运算装置的中央处理器(Central Processing化it: )21;由作为易失性的存储装置的RAM(Random Access Memory:随机存取存储器)、与作为非易失性的存储装置的R〇M(Read only Memory:只读存储器)构成的存储器22;外部存储装置23;与机器人10等外部的装置进行通信的通信装置24;鼠标或键盘等输入装置25;显示器等输出装置26; W及将控制部20与其他单元连接的接口(I/F)27。 As shown, for example, the control unit is configured as a central computer 20 includes a processor (Central Processing of it:) arithmetic unit 21; as a volatile memory device RAM Random Access Memory (: Random Access Memory ), and R〇M as a nonvolatile storage device (Read only memory: read-only memory) memory composed of 22; the external storage device 23; 24 for a communication apparatus for communicating with an external device of the robot 10 and the like; or mouse an input device such as a keyboard 25; a display output device 26; W and an interface (I / F) 20 is connected to a control unit 27 to other units.

[0358] 上述各功能部例如是通过CPU21在存储器22读出并执行储存于存储器22的规定的程序从而实现的。 [0358] The respective functional unit, for example, by CPU21 reads out and executes a program stored in a predetermined memory 22 in the memory 22 to achieve. 此外,规定的程序例如可W预先安装于存储器22,也可W经由通信装置24 而从网络下载从而安装或者更新。 Further, the program may be predetermined, for example, pre-installed in the memory 22 W, W may be so installed or downloaded from a network via the communication device update 24.

[0359] 对于W上的机器人系统1的结构而言,在对本实施方式的特征进行说明时对主要结构进行了说明,并且不限定于上述结构。 [0359] For the configuration of a robot system of the W 1, when the characteristic of the present embodiment will be explained the main structure has been described, and is not limited to the above. 另外,不排除具备一般的机器人系统的结构。 Further, the structure does not exclude have a general robot system.

[0360] 接下来,对本实施方式的由上述结构构成的机器人系统1的特征的处理进行说明。 [0360] Next, the processing characteristics of the robot system according to the present embodiment having the above structure will be described. 在本实施方式中,W使用手眼摄像机15按顺序对如图21所示的对象物01、02、03进行目视检查的作业为例进行说明。 In the present embodiment, W operations using hand-eye camera 15 in sequence as shown in FIG. 21 01,02,03 object visually inspected as an example.

[0361] 若经由未图示的按钮等而输入控制开始指示,则控制部20通过位置控制W及视觉伺服而控制臂11。 [0361] When the control start instruction is input via a button or the like (not shown), the control unit 20 by the position control servo and W and the visual control arm 11. 驱动控制部220在从第二控制部213输入指令值的情况(在本实施方式中每30次进行1次)下,使用将从第一控制部202输出的值下,称为基于位置控制的指令值)、与从第二控制部213输出的值下,称为基于视觉伺服的指令值)W任意的分量合成而成的指令值,并向动作控制部101输出指示。 The drive control unit 220 is input in the second command value from the control section 213 (in the present embodiment is performed once every 30 embodiment), the use of the first value output from the control unit 202 under the control based on the position referred to command value), and the value from the second control unit 213 outputs, called command value) W from an arbitrary component synthesis based visual servo command value to the control unit 101 outputs the operation indication. 驱动控制部220在不从第二控制部213输入指令值的情况(在本实施方式中每30次进行29次)下,使用从第一控制部202输出的基于位置控制的指令值、与最后从第二控制部213输出并暂时存储于存储器22等的指令值,并向动作巧制部101输出指不。 Without 220 (29 times for every 30 in the present embodiment) The drive control section 213 from the input unit when the second control command value, using the position control based on an instruction value outputted from the first control unit 202, and finally the second control unit 213 outputs a command value and temporarily stored in the memory 22 or the like, and clever operation unit 101 from the output means is not made.

[0362] FIG. 25A is a diagram for explaining the trajectory of the endpoint when the arm 11 is controlled by position control and visual servoing. In FIG. 25A, object O1 is placed at position 1, object O2 at position 2, and object O3 at position 3. In FIG. 25A, the objects O1, O2, and O3 lie on the same plane (the XY plane), and the hand-eye camera 15 is at a constant position in the Z direction.

[0363] In FIG. 25A, the trajectory shown by the solid line is the trajectory of the endpoint when only the position-control-based command value is used. This trajectory passes directly above positions 1, 2, and 3, so if the objects O1, O2, and O3 are always placed at the same positions and orientations, they can be visually inspected by position control alone.

[0364] In contrast, consider the case in FIG. 25A where object O2 has moved from position 2 on the solid line to the post-movement position 2. Since the endpoint moves above the position of object O2 shown by the solid line, it is conceivable that, when only the position-control-based command value is used, the inspection accuracy for object O2 will deteriorate or the inspection will become impossible.

[0365] Visual servoing is well suited to coping with such movement of an object's position. With visual servoing, even if the position of the object shifts, the endpoint can be moved directly above the object. For example, if object O2 is at the post-movement position 2 and the image shown in FIG. 25B is given as the target image, then, assuming that only the visual-servo-based command value is used, the endpoint follows the trajectory shown by the dotted line in FIG. 25A.

[0366] Visual servoing is thus a very useful control method that can cope with displacement of the object; however, because of the frame rate of the first imaging unit 30 or the second imaging unit 40, the image processing time of the image processing unit 212, and the like, it has the problem that reaching the target position takes more time than with position control.

[0367] Therefore, by using the position-control and visual-servo command values simultaneously (performing position control and visual servoing at the same time, that is, parallel control), inspection accuracy is secured against positional deviations of the objects O1, O2, and O3 while the endpoint moves faster than with visual servoing alone.

[0368] Note that "simultaneously" is not limited to exactly the same time or instant. For example, using the position-control and visual-servo command values simultaneously is a concept that includes both the case where the position-control command value and the visual-servo command value are output at the same time and the case where they are output shifted by a minute time. The minute time may be of any length as long as the same processing as in the simultaneous case is possible.

[0369] In particular, in the case of visual inspection, it suffices for the field of view of the hand-eye camera 15 to contain object O2 (object O2 does not need to be at the center of the field of view), so there is no problem even if the trajectory does not pass directly above object O2.

[0370] Therefore, in the present embodiment, the drive control unit 220 combines the position-control-based command value and the visual-servo-based command value so as to form a trajectory in which the field of view of the hand-eye camera 15 contains object O2. The trajectory in this case is shown by the dash-dotted line in FIG. 25A. This trajectory does not pass directly above object O2, but it passes through positions where inspection accuracy can be secured to the greatest extent possible.

[0371] In addition to deviations in the placement of the object, expansion of the components of the arm 11 caused by temperature changes and the like is also a significant factor in the deviation between positions on the path and the actual position of the object; this case, too, can be handled by using the position-control and visual-servo command values simultaneously.

[0372] The position of the trajectory shown by the dash-dotted line in FIG. 25A can be changed by means of the component α. FIG. 26 is a diagram for explaining the component α.

[0373] FIG. 26A is a diagram showing the relationship between the distance to the target (here, the objects O1, O2, and O3) and the component α. Line A is a case where the component α is constant regardless of the distance to the target position. Line B is a case where the component α is decreased stepwise in accordance with the distance to the target position. Lines C and D are cases where the component α is decreased continuously in accordance with the distance to the target position: line C is a case where the change in the component α becomes smaller in proportion to the distance, and line D is a case where the component α is proportional to the distance. In all cases, the component α satisfies 0 < α < 1.

[0374] In the cases of lines B, C, and D in FIG. 26A, the component α is set so that, as the distance to the target position becomes shorter, the weight of the position-control command value decreases and the weight of the visual-servo command value increases. Thus, even when the target position has moved, a trajectory can be generated that brings the endpoint closer to the target position.

[0375] Furthermore, since the component α is set so that the position-control and visual-servo command values are superimposed, the component α can be changed continuously in accordance with the distance to the target position. By changing the component continuously in accordance with the distance, the control can be switched smoothly from arm control based mainly on position control to arm control based mainly on visual servoing.

[0376] As shown in FIG. 26A, the component α is not limited to being defined by the distance to the target (here, the objects O1, O2, and O3). As shown in FIG. 26B, the component α may instead be defined by the distance from the start position. In other words, the drive control unit 220 can determine the component α on the basis of the difference between the current position and the target position.
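The α profiles of FIG. 26 could be expressed, for example, as functions of the normalized remaining distance, as in the following sketch; the exact curve shapes assumed for lines C and D and the numeric bounds are illustrative assumptions, not values given in the patent.

```python
# Hypothetical alpha(distance) profiles corresponding to lines A-D of FIG. 26A.
# `d` is the remaining distance to the target, normalized to [0, 1]
# (d = 1 at the start, d = 0 at the target); the offsets keep alpha in (0, 1).

def alpha_A(d, const=0.5):
    return const                        # line A: constant regardless of distance

def alpha_B(d, steps=(0.8, 0.5, 0.2)):
    # line B: stepwise decrease as the target gets closer
    if d > 0.66:
        return steps[0]
    if d > 0.33:
        return steps[1]
    return steps[2]

def alpha_C(d):
    # line C: continuous decrease whose slope shrinks near the target (assumed quadratic shape)
    return 0.05 + 0.9 * (d ** 2)

def alpha_D(d):
    # line D: alpha roughly proportional to the distance (small offset keeps 0 < alpha < 1)
    return 0.05 + 0.9 * d
```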

[0377] The distance to the target and the distance from the start position may be obtained from the path acquired by the path acquisition unit 201, or may be obtained from the current image and the target image. For example, when they are obtained from the path, they can be calculated from the coordinates and order of the start position, the target, the object positions, and so on contained in the path-related information, together with the coordinates and order of the current position.

[0378] Because the arm 11 is to be controlled along the trajectory desired by the user, the relationship between the component and the difference between the current position and the target position shown in FIG. 26 can be input, for example, via an input unit such as the input device 25. Alternatively, this relationship may be stored in advance in a storage unit such as the memory 22 and then used. The relationship stored in the storage unit may be one that was input via the input unit or one that was initially set in advance.

[0379] According to the present embodiment, since the arm (and hence the hand-eye camera) is controlled using a command value obtained by combining the position-control and visual-servo command values at a constant ratio, high-speed inspection can be performed with good accuracy even when positional deviation of the object occurs. In particular, according to the present embodiment, the speed can be made comparable to that of position control (faster than visual servoing), and the inspection is more robust against positional deviation than position control.

[0380] In the present embodiment, the position-control-based command value and the visual-servo-based command value are normally combined; however, when the positional deviation of object O2 is larger than a predetermined threshold, for example, the arm 11 may be moved using only the visual-servo-based command value. The second control unit 213 may simply determine from the current image whether the positional deviation of object O2 is larger than the predetermined threshold.

[0381] In the present embodiment, the drive control unit 220 determines the component α on the basis of the difference between the current position and the target position, but the method of determining the component α is not limited to this. For example, the drive control unit 220 may vary the component α with the passage of time. Alternatively, the drive control unit 220 may vary the component α with time until a certain time has elapsed, and thereafter change the component α on the basis of the difference between the current position and the target position.

[0382] Third Embodiment

[0383] The second embodiment of the present invention normally controls the arm using a command value obtained by combining the position-control and visual-servo command values at a constant ratio, but the scope of application of the present invention is not limited to this.

[0384] The third embodiment of the present invention combines, depending on the position of the object, the case of using only the position-control command value and the case of using a command value obtained by combining the position-control and visual-servo command values at a constant ratio. The robot system 2 according to the third embodiment of the present invention is described below. Since the configuration of the robot system 2 is the same as that of the robot system 1 of the second embodiment, the description of its configuration is omitted and only its processing is described. Parts that are the same as in the second embodiment are given the same reference numerals, and their description is omitted.

[0385] FIG. 27 is a flowchart showing the flow of the control processing of the arm 11 according to the present invention. This processing is started, for example, by inputting a control start instruction via a button or the like (not shown). In the present embodiment, visual inspection of objects O1 and O2 is performed.

[0386] When the processing starts, the position control unit 200 performs position control (step S1000). That is, the first control unit 202 generates a command value on the basis of the path-related information acquired by the path acquisition unit 201 and outputs it to the drive control unit 220. The drive control unit 220 outputs the command value output from the first control unit 202 to the robot 10. The motion control unit 101 then moves the arm 11 (that is, the endpoint) in accordance with the command value.

[0387] Next, the first control unit 202 determines whether, as a result of moving the endpoint by position control, the endpoint has passed switching point 1 (step S1002). Information indicating the position of switching point 1 is contained in the path-related information set in advance.

[0388] FIG. 28 is a diagram for explaining the positions of objects O1 and O2, the positions of the switching points, and the trajectory of the endpoint. In the present embodiment, switching point 1 is set between the start point and object O1.

[0389] When the endpoint has not passed switching point 1 (NO in step S1002), the control unit 20 repeats the processing of step S1000.

[0390] When the endpoint has passed switching point 1 (YES in step S1002), the drive control unit 220 controls the arm 11 using position control and visual servoing (step S1004). That is, the first control unit 202 generates a command value on the basis of the path-related information acquired by the path acquisition unit 201 and outputs it to the drive control unit 220. In addition, the second control unit 213 generates a command value on the basis of the current image processed by the image processing unit 212 and the target image, and outputs it to the drive control unit 220. The drive control unit 220 switches the component α stepwise with the passage of time, combines the command value output from the first control unit 202 and the command value output from the second control unit 213 using the switched component α, and outputs the result to the robot 10. The motion control unit 101 then moves the arm 11 (that is, the endpoint) in accordance with the command value.

[0391] The processing of step S1004 is described in detail below. Before the processing of step S1004, that is, during the processing of step S1000, the command value from the visual servo unit 210 is not used. Therefore, the component α of the command value from the position control unit 200 is 1 (the component 1-α of the command value from the visual servo unit 210 is 0).

[0392] After the processing of step S1004 starts, once a certain time (for example, 10 msec) has elapsed, the drive control unit 220 switches the component α of the command value from the position control unit 200 from 1 to 0.9. The component 1-α of the command value from the visual servo unit 210 accordingly becomes 0.1. The drive control unit 220 then combines the two command values with the component α of the command value from the position control unit 200 set to 0.9 and the component 1-α of the command value from the visual servo unit 210 set to 0.1, and outputs the result to the robot 10.

[0393] After a further fixed time has elapsed, the drive control unit 220 switches the component α of the command value from the position control unit 200 from 0.9 to 0.8 and the component 1-α of the command value from the visual servo unit 210 from 0.1 to 0.2. In this way, the component α is switched stepwise as fixed time intervals elapse, and the switched component is used to combine the command value output from the first control unit 202 with the command value output from the second control unit 213.

[0394] This switching of the component α and combining of the command values are repeated until the component α of the command value from the position control unit 200 becomes 0.5 and the component 1-α of the command value from the visual servo unit 210 becomes 0.5. After the component α from the position control unit 200 has become 0.5 and the component 1-α from the visual servo unit 210 has become 0.5, the drive control unit 220 repeats the combining of the command values while maintaining the component α without further switching.
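A minimal sketch of the time-based stepwise switching described in paragraphs [0391] to [0394], assuming a 10 msec switching interval and steps of 0.1; the function and parameter names are hypothetical and the callables are placeholders for the two control units and the robot interface.

```python
# Hypothetical sketch of the stepwise blending schedule in paragraphs [0391]-[0394]:
# every STEP_TIME seconds the weight of the position-control command is moved
# one step (0.1) toward a target value, then held there.

import time

STEP_TIME = 0.01      # assumed switching interval (10 msec in the example)
STEP_SIZE = 0.1       # assumed step width for the component alpha

def ramp_alpha(alpha, target):
    """Move alpha one step toward the target value and clamp at the target."""
    if alpha > target:
        return max(alpha - STEP_SIZE, target)
    return min(alpha + STEP_SIZE, target)

def run_blended_segment(get_u_pos, get_u_vs, send_to_robot, done,
                        alpha=1.0, alpha_target=0.5):
    """Blend position-control and visual-servo commands while ramping alpha."""
    last_switch = time.monotonic()
    while not done():
        now = time.monotonic()
        if now - last_switch >= STEP_TIME and alpha != alpha_target:
            alpha = ramp_alpha(alpha, alpha_target)   # stepwise switch of alpha
            last_switch = now
        u_pos, u_vs = get_u_pos(), get_u_vs()
        u = [alpha * p + (1.0 - alpha) * v for p, v in zip(u_pos, u_vs)]
        send_to_robot(u)
    return alpha
```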

[0395] As a result, visual inspection can be performed even when the position of object O1 has changed. Moreover, when the endpoint is farther from the object than necessary, it is moved by position control alone, enabling high-speed processing. When the endpoint is close to the object, it is moved by position control and visual servoing, so changes in the position of the object can also be handled. Furthermore, by switching the component α gradually, sudden motion and vibration of the arm 11 can be prevented.

[0396] In the processing of step S1004, if the endpoint passes switching point 2 (step S1006, described in detail later) while the switching of the component α and the combining of the command values are in progress, the switching of the component α and the combining of the command values are not continued until the component α reaches 0.5; instead, the processing proceeds to step S1006.

[0397] Next, the first control unit 202 determines whether, as a result of moving the endpoint by position control and visual servoing, the endpoint has passed switching point 2 (step S1006). Information indicating the position of switching point 2 is contained in the path-related information. As shown in FIG. 28, switching point 2 is set at object O1.

[0398] When the endpoint has not passed switching point 2 (NO in step S1006), the control unit 20 repeats the processing of step S1004.

[0399] When the endpoint has passed switching point 2 (YES in step S1006), the drive control unit 220 switches the component α so that it increases stepwise with the passage of time, combines the command value output from the first control unit 202 and the command value output from the second control unit 213 using the switched component α, and outputs the result to the robot 10. The motion control unit 101 moves the arm 11 (that is, the endpoint) in accordance with the command value (step S1008).

[0400] The processing of step S1008 is described in detail below. Before the processing of step S1008, that is, during the processing of step S1006, the drive control unit 220 combines the command values with the component α of the command value from the position control unit 200 set to 0.5 and the component 1-α of the command value from the visual servo unit 210 set to 0.5.

[0401] After the processing of step S1008 starts, once a certain time (for example, 10 msec) has elapsed, the drive control unit 220 switches the component α of the command value from the position control unit 200 from 0.5 to 0.6. The component 1-α of the command value from the visual servo unit 210 accordingly becomes 0.4. The drive control unit 220 then combines the two command values with the component α set to 0.6 and the component 1-α set to 0.4, and outputs the result to the robot 10.

[0402] After a further fixed time has elapsed, the drive control unit 220 switches the component α of the command value from the position control unit 200 from 0.6 to 0.7 and the component 1-α of the command value from the visual servo unit 210 from 0.4 to 0.3. In this way, the component α is switched stepwise as fixed time intervals elapse, and the switched component is used to combine the command value output from the first control unit 202 with the command value output from the second control unit 213.

[0403] The drive control unit 220 repeats the switching of the component α until the component α reaches 1. When the component α is 1, the component 1-α of the command value from the visual servo unit 210 is 0. The drive control unit 220 therefore outputs the command value output from the first control unit 202 to the robot 10, and the motion control unit 101 moves the arm 11 (that is, the endpoint) in accordance with this command value (step S1010). As a result, the endpoint is moved by position control. The processing of step S1010 is the same as that of step S1000.

[0404] In this way, in the stage after passing object O1, the endpoint is moved by position control, enabling high-speed processing. Moreover, by switching the component α gradually, sudden motion and vibration of the arm 11 can be prevented.

[0405] Next, the first control unit 202 determines whether, as a result of moving the endpoint by position control, the endpoint has passed switching point 3 (step S1012). Information indicating the position of switching point 3 is contained in the path-related information set in advance. As shown in FIG. 28, switching point 3 is set between object O1 (switching point 2) and object O2.

[0406] When the endpoint has not passed switching point 3 (NO in step S1012), the control unit 20 repeats the processing of step S1010.

[0407] When the endpoint has passed switching point 3 (YES in step S1012), the drive control unit 220 switches the component α stepwise with the passage of time, combines the command value output from the first control unit 202 and the command value output from the second control unit 213 using the switched component α, and outputs the result to the robot 10. The motion control unit 101 then moves the arm 11 (that is, the endpoint) in accordance with the command value (step S1014). The processing of step S1014 is the same as that of step S1004.

[0408] Next, the first control unit 202 determines whether, as a result of moving the endpoint by position control and visual servoing, the endpoint has passed switching point 4 (step S1016). Information indicating the position of switching point 4 is contained in the path-related information. As shown in FIG. 28, switching point 4 is set at object O2.

[0409] When the endpoint has not passed switching point 4 (NO in step S1016), the control unit 20 repeats the processing of step S1014.

[0410] When the endpoint has passed switching point 4 (YES in step S1016), the drive control unit 220 switches the component α so that it increases stepwise with the passage of time, combines the command value output from the first control unit 202 and the command value output from the second control unit 213 using the switched component α, and outputs the result to the robot 10. The motion control unit 101 moves the arm 11 (that is, the endpoint) in accordance with the command value (step S1018). The processing of step S1018 is the same as that of step S1008.

[0411] The drive control unit 220 repeats the switching of the component α until the component α reaches 1. When the component α reaches 1, the drive control unit 220 outputs the command value output from the first control unit 202 to the robot 10, and the motion control unit 101 moves the arm 11 (that is, the endpoint) in accordance with this command value (step S1020). The processing of step S1020 is the same as that of step S1010.

[0412] Next, the first control unit 202 determines whether, as a result of moving the endpoint by position control, the endpoint has reached the target point (step S1022). Information indicating the position of the target point is contained in the path-related information set in advance.

[0413] When the endpoint has not reached the target point (NO in step S1022), the control unit 20 repeats the processing of step S1020.

[0414] When the endpoint has reached the target point (YES in step S1022), the drive control unit 220 ends the processing.

[0415] According to the present embodiment, when the endpoint is close to the object, it is moved by position control and visual servoing, so changes in the position of the object can also be handled. Moreover, when the endpoint (current position) is farther from the object than necessary, or when a predetermined condition is satisfied, such as the endpoint (current position) having passed the object, the endpoint is moved by position control alone, enabling high-speed processing.

[0416] Furthermore, according to the present embodiment, when switching between control based on position control and visual servoing and control based on position control alone, sudden motion and vibration of the arm can be prevented by switching the component α gradually.

[0417] In the present embodiment, when the component α is switched gradually, it is switched stepwise by 0.1 each time a certain time elapses, but the method of gradually switching the component α is not limited to this. For example, as shown in FIG. 26, the component α may be changed in accordance with the distance to the object (corresponding to the target position in FIG. 26A) or the distance away from the object (corresponding to the start position in FIG. 26B). Also, as shown in FIG. 26, the component α may be changed continuously (see, for example, lines C and D in FIGS. 26A and 26B).

[0418] In the present embodiment, when the position-control and visual-servo command values are used (steps S1004, S1008, S1014, and S1018), the component α takes the values 0.5, 0.6, 0.7, 0.8, and 0.9; however, the component α may take any value as long as it is a real number greater than 0 and less than 1.

[0419] Fourth Embodiment

[0420] In the second and third embodiments of the present invention, visual inspection is performed using the hand-eye camera, but the scope of application of the present invention is not limited to this.

[0421] The fourth embodiment of the present invention applies the present invention to an assembly task such as inserting an object into a hole. The fourth embodiment of the present invention is described below. Parts that are the same as in the second and third embodiments are given the same reference numerals, and their description is omitted.

[0422] FIG. 29 is a system configuration diagram showing an example of the configuration of a robot system 3 according to an embodiment of the present invention. The robot system 3 of the present embodiment mainly includes a robot 10A, the control unit 20, the first imaging unit 30, and the second imaging unit 40.

[0423] The robot 10A is an arm-type robot having an arm 11A that includes a plurality of joints 12 and a plurality of links 13. A hand 14 (a so-called end effector) that grips a workpiece W or a tool is provided at the tip of the arm 11A. The position of the endpoint of the arm 11A is the position of the hand 14. Note that the end effector is not limited to the hand 14.

[0424] A force sensor 102 (not shown in FIG. 29; see FIG. 30) is provided on the arm portion of the arm 11A. The force sensor 102 is a sensor that detects the force and moment received as a reaction force to the force output by the robot 10A. As the force sensor, for example, a six-axis force sensor capable of simultaneously detecting six components, namely force components along three translational axes and moment components about three rotational axes, can be used. The physical quantities used by the force sensor include current, voltage, electric charge, inductance, strain, resistance, electromagnetic induction, magnetism, air pressure, light, and the like. The force sensor 102 detects the six components by converting the desired physical quantity into an electric signal. The force sensor 102 is not limited to a six-axis sensor and may be, for example, a three-axis sensor.

[0425] Next, an example of the functional configuration of the robot system 3 is described. FIG. 30 is a functional block diagram of the robot system 3.

[0426] The robot 10A includes the motion control unit 101, which controls the arm 11A on the basis of the encoder values of the actuators, the sensor values of the sensors, and the like, and the force sensor 102.

[0427] The control unit 20A mainly includes the position control unit 200, the visual servo unit 210, the image processing unit 212, the drive control unit 220, and a force control unit 230.

[0428] The force control unit 230 performs force control (force sense control) on the basis of sensor information (force information and moment information) from the force sensor 102.

[0429] In the present embodiment, impedance control is performed as the force control. Impedance control is a position and force control technique that sets the mechanical impedance (inertia, damping coefficient, stiffness) produced when a force is applied from the outside to the hand tip of the robot (the hand 14) to values suitable for the intended task. Specifically, it is control in which a model connecting mass, viscosity-coefficient, and elasticity elements to the end effector of the robot is assumed, and the end effector is brought into contact with an object with the mass, viscosity coefficient, and elastic coefficient set as targets.
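As a rough illustration of the mass-viscosity-elasticity model mentioned above, the following sketch integrates a one-degree-of-freedom impedance equation at each control step; the gains and the time step are assumed values, not ones specified in the patent.

```python
# Hypothetical discrete-time sketch of the mass-damper-spring (impedance) model
# described in paragraph [0429]:  M*a + D*v + K*x = F_ext, solved for the
# endpoint correction x at each control step.  M, D, K and DT are assumed values.

M, D, K = 1.0, 20.0, 100.0   # target mass, viscosity (damping), elasticity
DT = 0.01                    # assumed control period in seconds

def impedance_step(x, v, f_ext):
    """Advance the 1-DOF impedance model one step; returns new offset and velocity."""
    a = (f_ext - D * v - K * x) / M    # acceleration from the impedance equation
    v_next = v + a * DT
    x_next = x + v_next * DT
    return x_next, v_next              # x_next is the endpoint correction
```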

[0430] The force control unit 230 determines the movement direction and movement amount of the endpoint through impedance control. The force control unit 230 then determines the target angles of the actuators provided at the joints 12 on the basis of the movement direction and movement amount of the endpoint. Furthermore, the force control unit 230 generates a command value that moves the arm 11A to the target angles and outputs it to the drive control unit 220. Since the processing performed by the force control unit 230 is standard, a detailed description is omitted.

[0431] Note that the force control is not limited to impedance control; a control method that can appropriately handle disturbance forces, such as compliance control, may also be adopted. Force control requires detecting the force applied to the end effector such as the hand 14, but the method of detecting the force applied to the end effector is not limited to the use of a force sensor. For example, the external force received by the end effector can also be estimated from the torque values of the individual axes of the arm 11A. Accordingly, for force control it is sufficient that the arm 11A has a mechanism for directly or indirectly obtaining the force applied to the end effector.

[0432] Next, characteristic processing of the robot system 3 according to the present embodiment having the above configuration is described. FIG. 31 is a flowchart showing the flow of the control processing of the arm 11A according to the present invention. This processing is started, for example, by inputting a control start instruction via a button or the like (not shown). In the present embodiment, as shown in FIG. 32, an assembly task of inserting the workpiece W into a hole H is described as an example.

[0433] When a control start instruction is input via a button or the like (not shown), the first control unit 202 controls the arm 11 by position control and moves the endpoint (step S130). The processing of step S130 is the same as that of step S1000.

[0434] In the present embodiment, the component of the position-control-based command value is denoted α, the component of the visual-servo-based command value is denoted β, and the component of the force-control-based command value is denoted γ. The components α, β, and γ are set so that their sum is 1. In step S130, α is 1 and β and γ are 0.
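A minimal sketch, under the assumption of joint-space command vectors, of how the three command values could be combined with weights α, β, and γ that sum to 1; the example weights in the comments mirror the values used later in this embodiment, while the function names are illustrative assumptions.

```python
# Hypothetical sketch of the three-way blend in paragraph [0434]: the drive
# control unit combines position-control, visual-servo, and force-control
# command values with weights alpha, beta, gamma that sum to 1.

def blend3(u_pos, u_vs, u_force, alpha, beta, gamma):
    """Weighted sum of the three joint-space command vectors."""
    assert abs(alpha + beta + gamma - 1.0) < 1e-9, "weights must sum to 1"
    return [alpha * p + beta * v + gamma * f
            for p, v, f in zip(u_pos, u_vs, u_force)]

# Example weight sets used in the text (the vectors themselves are assumed):
# step S130: (1.0, 0.0, 0.0)   -> position control only
# step S134: (0.05, 0.95, 0.0) -> mostly visual servoing
# step S138: (0.5, 0.0, 0.5)   -> position control plus force control
```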

[0435] Next, the first control unit 202 determines whether, as a result of moving the endpoint by position control, the endpoint has passed switching point 1 (step S132). The processing of step S132 is the same as that of step S1002. Information indicating the position of switching point 1 is contained in the path-related information set in advance.

[0436] FIG. 32 is a diagram for explaining the trajectory of the endpoint and the positions of the switching points. In the present embodiment, switching point 1 is set at a predetermined position in the work space.

[0437] When the endpoint has not passed switching point 1 (NO in step S132), the first control unit 202 repeats the processing of step S130.

[0438] When the endpoint has passed switching point 1 (YES in step S132), the drive control unit 220 switches the components α and β stepwise with the passage of time, combines the command value output from the first control unit 202 and the command value output from the second control unit 213 using the switched components α and β, and outputs the result to the robot 10. The motion control unit 101 then moves the arm 11 (that is, the endpoint) in accordance with the command value (step S134). That is, in step S134, the endpoint is moved by position control and visual servoing.

[0439] The processing of step S134 is described in detail below. Before the processing of step S134, that is, during the processing of step S132, the drive control unit 220 combines the command values with the component α of the command value from the position control unit 200 set to 1, the component β of the command value from the visual servo unit 210 set to 0, and the component γ of the command value from the force control unit 230 set to 0.

[0440] After the processing of step S134 starts, once a certain time (for example, 10 msec) has elapsed, the drive control unit 220 switches the component α of the command value from the position control unit 200 from 1 to 0.95 and switches the component β of the command value from the visual servo unit 210 to 0.05. The drive control unit 220 then combines the two command values with the component from the position control unit 200 set to 0.95 and the component from the visual servo unit 210 set to 0.05, and outputs the result to the robot 10.

[0441] After a further fixed time has elapsed, the drive control unit 220 switches the component α of the command value from the position control unit 200 from 0.95 to 0.9 and the component β of the command value from the visual servo unit 210 from 0.05 to 0.1.

[0442] In this way, the components α and β are switched stepwise as fixed time intervals elapse, and the switched components are used to combine the command value output from the first control unit 202 with the command value output from the second control unit 213. The drive control unit 220 repeats this switching of the components until the component α reaches 0.05 and the component β reaches 0.95. As a result, the endpoint is moved by position control and visual servoing. Note that force control is not used in step S134, so the component γ remains 0.

[0443] The final ratio α:β of the components α and β is not limited to 0.05:0.95. The components α and β can take any values whose sum is 1. However, in this kind of task, since the position of the hole H is not necessarily constant, the visual-servo component β is preferably made larger than the position-control component α.

[0444] The method of gradually switching the component α is not limited to this. For example, as shown in FIGS. 26A and 26B, the component α may be changed in accordance with the distance to the object or the distance away from the object. Also, as shown by lines C and D in FIG. 26, the component α may be changed continuously.

[0445] Next, the second control unit 213 determines whether, as a result of moving the endpoint by position control and visual servoing, the endpoint has passed switching point 2 (step S136).

[0446] Switching point 2 is determined by the relative position from the hole H. For example, switching point 2 is a position at a distance L (for example, 10 cm) from the center of the opening of the hole H. The positions at a distance L from the center of the opening of the hole H can be set as a hemisphere in the X, Y, Z space. FIG. 32 illustrates, as an example, a position at a distance L from the center of the opening of the hole H in the Z direction.

[0447] The image processing unit 212 extracts from the current image an image containing the tip of the workpiece W and the hole H, and outputs it to the second control unit 213. The image processing unit 212 also calculates the relationship between distances in the image and distances in real space from the camera parameters (focal length and the like) of the first imaging unit 30 or the second imaging unit 40, and outputs it to the second control unit 213. The second control unit 213 determines whether the endpoint has passed switching point 2 on the basis of the difference between the position of the tip of the workpiece W and the position of the center of the hole H in the extracted image.
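One possible way to implement the check in paragraph [0447] is sketched below, assuming a simple pinhole-camera scale factor and a known depth to the hole; the threshold value and all names are illustrative assumptions rather than elements defined in the patent.

```python
# Hypothetical sketch of the check in paragraph [0447]: the pixel distance
# between the workpiece tip and the hole center is converted to a real-space
# distance with an assumed pinhole-camera scale and compared against L.

L_THRESHOLD = 0.10   # assumed switching distance L in meters (10 cm)

def metres_per_pixel(focal_length_px, depth_m):
    """Rough pinhole-camera scale: real-world size per pixel at a given depth."""
    return depth_m / focal_length_px

def passed_switching_point_2(tip_px, hole_px, focal_length_px, depth_m):
    """Return True once the workpiece tip is within L of the hole-opening center."""
    du = tip_px[0] - hole_px[0]
    dv = tip_px[1] - hole_px[1]
    dist_px = (du * du + dv * dv) ** 0.5
    dist_m = dist_px * metres_per_pixel(focal_length_px, depth_m)
    return dist_m <= L_THRESHOLD
```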

[0448] When the endpoint has not passed switching point 2 (NO in step S136), the first control unit 202, the second control unit 213, and the drive control unit 220 repeat the processing of step S134.

[0449] When the endpoint has passed switching point 2 (YES in step S136), the drive control unit 220 combines the command value output from the first control unit 202 and the command value output from the force control unit 230, and outputs the result to the robot 10. The motion control unit 101 moves the arm 11 (that is, the endpoint) in accordance with the command value (step S138).

[0450] The processing of step S138 is described in detail below. Before the processing of step S138, that is, during the processing of step S134, the drive control unit 220 combines the command values with the component α of the command value from the position control unit 200 set to 0.05 and the component β of the command value from the visual servo unit 210 set to 0.95.

[0451] After the processing of step S138 starts, the drive control unit 220 switches the component α of the command value from the position control unit 200 from 0.05 to 0.5 and switches the component γ of the command value from the force control unit 230 from 0 to 0.5. As a result, the drive control unit 220 combines the command values with the component α of the command value from the position control unit 200 set to 0.5, the component β of the command value from the visual servo unit 210 set to 0, and the component γ of the command value from the force control unit 230 set to 0.5, and outputs the result to the robot 10. Note that visual servoing is not used in step S138, so the component β is 0. The components α and γ may also be switched stepwise.

[0452] Next, the force control unit 230 determines whether, as a result of moving the endpoint by position control and force control, the endpoint has reached the target point (step S140). Whether the target point has been reached can be determined from the output of the force sensor 102.
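A minimal sketch of one force-based arrival check, assuming that insertion is judged complete when the reaction force along the insertion axis exceeds a contact threshold; the threshold value and the choice of axis are assumptions, since the patent only states that the force-sensor output is used.

```python
# Hypothetical sketch of the arrival check in paragraph [0452]: insertion is
# considered complete when the reaction force along the insertion axis,
# measured by the force sensor, exceeds an assumed contact threshold.

F_CONTACT = 5.0   # assumed reaction-force threshold in newtons

def reached_target(force_sensor_reading):
    """force_sensor_reading: (fx, fy, fz, mx, my, mz) from the 6-axis sensor."""
    fz = force_sensor_reading[2]          # force along the insertion (Z) axis
    return abs(fz) >= F_CONTACT           # seated against the bottom of the hole
```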

[0453] When the endpoint has not reached the target point (NO in step S140), the position control unit 200, the force control unit 230, and the drive control unit 220 repeat the processing of step S138.

[0454] When the endpoint has reached the target point (YES in step S140), the drive control unit 220 ends the processing.

[0455] According to the present embodiment, the high speed of position control can be maintained while coping with varying target positions. Moreover, even when visual servoing cannot be used, for example because the target position cannot be confirmed, the task can be performed safely while maintaining the high speed of position control.

[0456] In the present embodiment, switching point 1 is set in advance at an arbitrary position in the work space and switching point 2 is set at a position a predetermined distance away from the hole H, but the positions of switching points 1 and 2 are not limited to this. The positions of switching points 1 and 2 may be set using the elapsed time from a predetermined position; specifically, for example, the position of switching point 2 can be set to 30 seconds after passing switching point 1. The positions of switching points 1 and 2 may also be set using the distance from a predetermined position; specifically, for example, the position of switching point 1 can be set at a distance X from the start point. Furthermore, the positions of switching points 1 and 2 may be set in accordance with a signal input from the outside (for example, an input signal from the input device 25).

[0457] Fifth Embodiment

[0458] The fourth embodiment of the present invention performs an assembly task such as inserting an object into a hole by means of position control and force control, but the scope of application of the present invention is not limited to this.

[0459] The fifth embodiment of the present invention applies the present invention to an assembly task such as inserting an object into a hole by means of position control, visual servoing, and force control. The fifth embodiment of the present invention is described below. Since the configuration of the robot system 4 of the fifth embodiment is the same as that of the robot system 3, its description is omitted. In the processing performed by the robot system 4, parts that are the same as in the second, third, and fourth embodiments are given the same reference numerals, and detailed description is omitted.

[0460] Characteristic processing of the robot system 4 according to the present embodiment is described below. FIG. 33 is a flowchart showing the flow of the control processing of the arm 11A of the robot system 4. This processing is started, for example, by inputting a control start instruction via a button or the like (not shown). In the present embodiment, as shown in FIG. 34, an assembly task of inserting the workpiece W into a hole H formed in a movable stage is described as an example.

[0461] When a control start instruction is input via a button or the like (not shown), the first control unit 202 controls the arm 11A by position control and moves the endpoint (step S130).

[0462] Next, the first control unit 202 determines whether, as a result of moving the endpoint by position control, the endpoint has passed switching point 1 (step S132).

[0463] When the endpoint has not passed switching point 1 (NO in step S132), the first control unit 202 repeats the processing of step S130.

[0464] When the endpoint has passed switching point 1 (YES in step S132), the drive control unit 220 switches the components α and β stepwise with the passage of time, combines the command value output from the first control unit 202 and the command value output from the second control unit 213 using the switched components α and β, and outputs the result to the robot 10. The motion control unit 101 then moves the arm 11A (that is, the endpoint) in accordance with the command value (step S134).

[0465] Next, the second control unit 213 determines whether, as a result of moving the endpoint by position control and visual servoing, the endpoint has passed switching point 2 (step S136).

[0466] When the endpoint has not passed switching point 2 (NO in step S136), the first control unit 202, the second control unit 213, and the drive control unit 220 repeat the processing of step S134.

[0467] When the endpoint has passed switching point 2 (YES in step S136), the drive control unit 220 combines the command value output from the first control unit 202, the command value output from the second control unit 213, and the command value output from the force control unit 230, and outputs the result to the robot 10. The motion control unit 101 moves the arm 11A (that is, the endpoint) in accordance with the command value (step S139).

[0468] The processing of step S139 is described in detail below. Before the processing of step S139, that is, during the processing of step S134, the drive control unit 220 combines the command values with the component α of the command value from the position control unit 200 set to 0.05 and the component β of the command value from the visual servo unit 210 set to 0.95.

[0469] After the processing of step S139 starts, the drive control unit 220 switches the component α of the command value from the position control unit 200 from 0.05 to 0.34, switches the component β of the command value from the visual servo unit 210 from 0.95 to 0.33, and switches the component γ of the command value from the force control unit 230 from 0 to 0.33. As a result, the drive control unit 220 combines the command values with the component α of the command value from the position control unit 200 set to 0.34, the component β of the command value from the visual servo unit 210 set to 0.33, and the component γ of the command value from the force control unit 230 set to 0.33, and outputs the result to the robot 10.

[0470] 此外,分量α、β、丫的比率α:β: 丫并不限定于0.34:0.33:0.33。 [0470] Furthermore, the component [alpha], the ratio of beta], Ah α: β: Ah is not limited to 0.34: 0.33: 0.33. 分量α、β、丫能够与作业对应地设定分量α、β、丫的和为1的各种值。 Components α, β, Ah can be set in correspondence with operation components α, β, to various values, and Ya 1. 另外,也可W缓缓切换分量α、β、丫。 Furthermore, W may be slowly switched components α, β, Ah.

[0471] Next, the force control unit 230 judges the result of moving the end point by the visual servoing and the force control, that is, judges whether or not the end point has reached the target point (step S140).

[0472] When the end point has not reached the target point (No in step S140), the position control unit 200, the visual servo unit 210, the force control unit 230, and the drive control unit 220 repeat the processing of step S139.

[0473] When the end point has reached the target point (Yes in step S140), the drive control unit 220 ends the processing.

[0474] According to this embodiment, the end point can be moved to a changed target position while the high speed of position control is maintained. In particular, even when the target position moves and even when the target position cannot be confirmed, the control is performed by position control, visual servoing, and force control, so the work can be carried out safely while the high speed of position control is maintained.

[0475] In this embodiment the arm is controlled by performing position control, visual servoing, and force control simultaneously (parallel control), whereas in the fifth embodiment the arm is controlled by performing position control and force control simultaneously (parallel control). Based on predetermined conditions such as whether the workpiece W, the hole H, and the like can be visually confirmed and whether they are moving, the drive control unit 220 can select, according to conditions stored in advance in the memory 22 or the like, whether to perform position control, visual servoing, and force control simultaneously, or to perform position control and force control simultaneously. A sketch of such a condition-based selection follows.
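The following sketch illustrates one way such a selection could be expressed; the function name, the condition flags, and the returned labels are assumptions for illustration and are not defined in the patent:

```python
def select_control_mode(target_visible, target_moving):
    """Choose which controllers to run in parallel, based on whether the
    workpiece or hole can be visually confirmed and whether it is moving.
    The actual conditions would be read from memory 22 or the like."""
    if target_visible and target_moving:
        # A moving, visible target: track it with visual servoing in
        # addition to position control and force control.
        return ("position", "visual_servo", "force")
    # Target not visible (or not moving): position control plus force control.
    return ("position", "force")
```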

[0476] In the above embodiments, the case of using a single-arm robot has been described, but the present invention can also be applied to a dual-arm robot. In the above embodiments, the end point is provided at the tip of the robot arm, but "provided on the robot" is not limited to being provided on the arm. For example, the robot may be provided with a manipulator that is composed of a plurality of joints and links and that moves as a whole by moving the joints, and the tip of the manipulator may be used as the end point.

[0477] In the above embodiments, two imaging units, the first imaging unit 30 and the second imaging unit 40, are provided, but a single imaging unit may be used.

[0478] The present invention has been described above using embodiments, but the technical scope of the present invention is not limited to the scope described in the above embodiments. It will be apparent to those skilled in the art that various changes and improvements can be made to the above embodiments, and it is clear from the claims that embodiments to which such changes or improvements have been made are also included in the technical scope of the present invention. In particular, the present invention may be provided as a robot system in which the robot, the control unit, and the imaging unit are provided separately, as a robot that includes the control unit and the like, or as a robot control device composed of only the control unit, or of the control unit and the imaging unit. The present invention can also be provided as a program for controlling a robot or the like, and as a storage medium storing the program.

[0479] Sixth Embodiment

[0480] 1. Approach of the Present Embodiment

[0481] Robot control using image information is widely known. For example, visual servo control is known in which image information is acquired continuously and the result of comparing information obtained from that image information with target information is fed back. In visual servoing, the robot is controlled in the direction that reduces the difference between the information obtained from the latest image information and the target information. Specifically, the control obtains, for example, the amount of change of the joint angles that brings the robot closer to the target, and drives the joints in accordance with that amount of change.

[0482] With a method in which a target position and posture of the robot's hand tip or the like is given and the robot is controlled so as to reach that target position and posture, it is difficult to improve the positioning accuracy, that is, it is difficult to move the hand tip (hand) or the like accurately to the target position and posture. Ideally, once the model of the robot is determined, the hand tip position and posture can be uniquely obtained from that model. The model here means information such as the length of the frame (link) between two joints and the structure of the joints (the rotation direction of each joint, whether there is an offset, and so on).

[0483] In practice, however, a robot contains various errors, for example variations in link length and deflection caused by gravity. Because of these error factors, when the robot is controlled to take a given posture (for example, by determining the angle of each joint), the ideal position and posture and the actual position and posture take different values.

[0484] In this respect, since visual servo control feeds back the result of image processing on the captured image, it can recognize and correct a deviation between the current position and posture and the target position and posture, just as a person can fine-tune the movement direction of the arm and hand while visually observing the work situation.

[0485] In visual servo control, as the "information obtained from the image" and the "target information" mentioned above, three-dimensional position and posture information of the robot's hand tip or the like can be used, or the image feature quantity obtained from the image can be used without converting it into position and posture information. Visual servoing that uses position and posture information is called position-based visual servoing, and visual servoing that uses the image feature quantity is called feature-based visual servoing.

[0486] In order to perform visual servoing appropriately, the position and posture information or the image feature quantity must be detected from the image information with good accuracy. If the accuracy of this detection processing is low, the current state is recognized incorrectly. As a result, the information fed back to the control loop does not bring the state of the robot appropriately closer to the target state, and highly accurate robot control cannot be achieved.

[0487] Both the position and posture information and the image feature quantity are assumed to be obtained by some detection processing (for example, matching processing), but the accuracy of that detection processing is not necessarily sufficient. This is because, in the environment in which the robot actually operates, the captured image contains not only the object to be recognized (for example, the robot's hand) but also workpieces, jigs, and objects placed in the operating environment. Since various objects appear in the background of the image, the recognition accuracy (detection accuracy) for the desired object decreases, and the accuracy of the obtained position and posture information and image feature quantity also becomes low.

[0488] Patent Document 1 discloses a method in which, in position-based visual servoing, an abnormality is detected by comparing the spatial position or movement speed calculated from the image with the spatial position or movement speed calculated from the encoders. Since the spatial position is information contained in the position and posture information, and the movement speed is information obtained from the amount of change of the position and posture information, the spatial position or movement speed is hereinafter treated as position and posture information.

[0489] By using the method of Patent Document 1, it should be possible to detect an abnormality when the visual servoing produces one, for example when a large error arises in the position and posture information obtained from the image information. If abnormality detection can be achieved, the control of the robot can be stopped, or the position and posture information can be detected again, so that at least the use of abnormal information as-is in the control can be suppressed.

[0490] However, the method of Patent Document 1 presupposes position-based visual servoing. With a position basis, as described above, it is sufficient to compare the position and posture information easily obtained from information such as the encoders with the position and posture information obtained from the image information, so the method is easy to implement. Feature-based visual servoing, on the other hand, uses the image feature quantity in the control of the robot, and even if the spatial position of the robot's hand tip or the like can easily be obtained from information such as the encoders, its relationship to the image feature quantity cannot be obtained directly. That is, when feature-based visual servoing is assumed, it is difficult to apply the method of Patent Document 1.

[0491] Therefore, the applicant proposes the following method: in control that uses the image feature quantity, an abnormality is detected by using the image feature quantity change amount actually obtained from the image information and an estimated image feature quantity change amount inferred from information obtained as a result of controlling the robot. Specifically, as shown in Fig. 35, the robot control device 1000 of this embodiment includes a robot control unit 1110 that controls the robot 20000 based on image information; a change amount calculation unit 1120 that obtains the image feature quantity change amount based on the image information; a change amount estimation unit 1130 that calculates the estimated image feature quantity change amount, that is, an estimate of the image feature quantity change amount, based on change amount estimation information that is information on the robot 20000 or the object and that is information other than the image information; and an abnormality determination unit 1140 that performs abnormality determination by comparing the image feature quantity change amount with the estimated image feature quantity change amount.

[0492] Here, the image feature quantity is, as described above, a quantity representing features such as a region in the image, an area, the length of a line segment, or the positions of feature points, and the image feature quantity change amount is information representing the change between a plurality of image feature quantities obtained from a plurality of (in a narrow sense, two) pieces of image information. As an example, if the image feature quantity is the two-dimensional positions of three feature points on the image, the image feature quantity is a 6-dimensional vector, and the image feature quantity change amount is the difference between two such 6-dimensional vectors, that is, a 6-dimensional vector whose elements are the differences of the corresponding vector elements. A small numerical sketch is given below.
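The following is a small numerical illustration of this representation (the coordinate values are made up for the example):

```python
import numpy as np

# Three feature points, each with an (x, y) image position, stacked into a
# 6-dimensional feature vector f = [x1, y1, x2, y2, x3, y3].
f_old = np.array([120.0, 80.0, 200.0, 95.0, 150.0, 160.0])  # from the earlier image
f_new = np.array([123.5, 78.0, 204.0, 96.5, 152.0, 158.5])  # from the later image

# Image feature quantity change amount: the element-wise difference.
delta_f = f_new - f_old   # 6-dimensional change vector
```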

[0493] The change amount estimation information is information used to estimate the image feature quantity change amount, and is information other than the image information. The change amount estimation information may be, for example, information obtained (measured) as a result of controlling the robot, specifically the joint angle information of the robot 20000. The joint angle information can be obtained from the encoders that measure and control the operation of the motors (more broadly, actuators) driving the joints of the robot. Alternatively, the change amount estimation information may be position and posture information of the end effector 2220 of the robot 20000 or of the object on which the robot 20000 is working. The position and posture information is, for example, a 6-dimensional vector containing the three-dimensional position (x, y, z) of a reference point of the object and the rotations (R1, R2, R3) about the respective axes relative to a reference posture. Various methods for obtaining the position and posture information of an object are conceivable, for example distance measurement using ultrasonic waves, measurement using a surveying instrument, attaching an LED or the like to the hand tip and detecting that LED, or using a mechanical three-dimensional measuring device.

[0494] In this way, an abnormality can be detected in robot control that uses the image feature quantity (in a narrow sense, feature-based visual servoing). Here, the image feature quantity change amount obtained from the actually acquired image information is compared with the estimated image feature quantity change amount obtained from the change amount estimation information, which is acquired from a viewpoint different from the image information.

[0495] Control of the robot based on image information is not limited to visual servoing. For example, in visual servoing, information based on image information is continuously fed back to the control loop, but a vision-based approach in which image information is acquired once, the movement amount toward the target position and posture is obtained from that image information, and position control is then performed according to that movement amount may also be used as control of the robot based on image information. Furthermore, besides visual servoing and such a vision-based approach, the method of this embodiment can also be applied as a means of detecting abnormalities in the detection of information from image information in other robot control that uses image information.

[0496] However, as described later, the method of this embodiment assumes that the calculation of the estimated image feature quantity change amount uses a Jacobian matrix. A Jacobian matrix is information representing the relationship between the amount of change of one set of values and the amount of change of another. For example, even if first information x and second information y are in a nonlinear relationship (y = g(x), where g is a nonlinear function), in the neighborhood of a given value the change Δx of the first information and the change Δy of the second information can be regarded as being in a linear relationship (Δy = h(Δx), where h is a linear function), and the Jacobian matrix represents that linear relationship. That is, this embodiment assumes that the processing uses not the image feature quantity itself but the image feature quantity change amount. Therefore, when applying the method of this embodiment to control other than visual servoing, such as the vision-based approach described above, note that a method that acquires image information only once cannot be used; the image information must be acquired at least twice so that the image feature quantity change amount can be obtained. For example, if the method of this embodiment is applied to the vision-based approach, the acquisition of image information and the calculation of the target movement amount must be performed a plurality of times.

[0497] In the following, after describing system configuration examples of the robot control device 1000 and the robot of this embodiment, an outline of visual servoing is given. On that basis, the abnormality detection method of this embodiment is described, and finally modifications are described as well. In the following, visual servoing is taken as the example of robot control using image information, but the description can be extended to robot control using other image information.

[0498] 2. System Configuration Examples

[0499] Fig. 36 shows a detailed system configuration example of the robot control device 1000 of this embodiment. However, the robot control device 1000 is not limited to the configuration of Fig. 36, and various modifications are possible, such as omitting some of these components or adding other components.

[0500] As shown in Fig. 36, the robot control device 1000 includes a target feature quantity input unit 111, a target trajectory generation unit 112, a joint angle control unit 113, a drive unit 114, a joint angle detection unit 115, an image information acquisition unit 116, an image feature quantity calculation unit 117, the change amount calculation unit 1120, the change amount estimation unit 1130, and the abnormality determination unit 1140.

[0501] The target feature quantity input unit 111 inputs the target image feature quantity fg to the target trajectory generation unit 112. The target feature quantity input unit 111 may be realized, for example, as an interface that accepts input of the target image feature quantity fg by the user. The robot control brings the image feature quantity f obtained from the image information closer to (in a narrow sense, into agreement with) the target image feature quantity fg input here. Alternatively, image information corresponding to the target state (a reference image or goal image) may be acquired and the target image feature quantity fg obtained from that image information, or the input of the target image feature quantity fg may be accepted directly without holding a reference image.

[0502] The target trajectory generation unit 112 generates a target trajectory along which the robot 20000 operates, based on the target image feature quantity fg and the image feature quantity f obtained from the image information. Specifically, it obtains the joint angle change amount Δθg that brings the robot 20000 closer to the target state (the state corresponding to fg). This Δθg becomes a provisional target value of the joint angles. The target trajectory generation unit 112 may also obtain from Δθg the drive amount of the joint angles per unit time (the dotted θg in Fig. 36).

[0503] The joint angle control unit 113 controls the joint angles based on the joint angle target value Δθg and the current joint angle values θ. For example, since Δθg is the amount of change of the joint angles, θ and Δθg are used to determine what values the joint angles should take. The drive unit 114 drives the joints of the robot 20000 in accordance with the control of the joint angle control unit 113.

[0504] The joint angle detection unit 115 detects the values of the joint angles of the robot 20000. Specifically, after the joint angles have been changed by the drive control of the drive unit 114, it detects the changed joint angle values and outputs them to the joint angle control unit 113 as the current joint angle values θ. The joint angle detection unit 115 may be realized, for example, as an interface that acquires information from the encoders.

[0505] The image information acquisition unit 116 acquires image information from an imaging unit or the like. The imaging unit here may be an imaging unit placed in the environment as shown in Fig. 37, or an imaging unit provided on the arm 2210 or the like of the robot 20000 (for example, a hand-eye camera). The image feature quantity calculation unit 117 calculates the image feature quantity based on the image information acquired by the image information acquisition unit 116. Various methods for calculating the image feature quantity from image information are known, such as edge detection processing and matching processing, and they can be widely applied in this embodiment, so a detailed description is omitted. The image feature quantity obtained by the image feature quantity calculation unit 117 is output to the target trajectory generation unit 112 as the latest image feature quantity f.

[0506] The change amount calculation unit 1120 holds the image feature quantities calculated by the image feature quantity calculation unit 117, and calculates the image feature quantity change amount Δf from the difference between an image feature quantity fold acquired in the past and the image feature quantity f to be processed (in a narrow sense, the latest image feature quantity).

[0507] The change amount estimation unit 1130 holds the joint angle information detected by the joint angle detection unit 115, and calculates the joint angle change amount Δθ from the difference between joint angle information θold acquired in the past and the joint angle information θ to be processed (in a narrow sense, the latest joint angle information). It then obtains the estimated image feature quantity change amount Δfe from Δθ. Although Fig. 36 illustrates the case in which the change amount estimation information is joint angle information, as described above, the position and posture information of the end effector 2220 of the robot 20000 or of the object may also be used as the change amount estimation information.

[0508] The robot control unit 1110 of Fig. 35 may be a control unit corresponding to the target feature quantity input unit 111, the target trajectory generation unit 112, the joint angle control unit 113, the drive unit 114, the joint angle detection unit 115, the image information acquisition unit 116, and the image feature quantity calculation unit 117 of Fig. 36.

[0509] As shown in Fig. 38, the method of this embodiment can also be applied to a robot that includes: a robot control unit 1110 that controls the robot (specifically, a robot main body 3000 including the arm 2210 and the end effector 2220) based on image information; a change amount calculation unit 1120 that obtains the image feature quantity change amount from the image information; a change amount estimation unit 1130 that calculates the estimated image feature quantity change amount, that is, an estimate of the image feature quantity change amount, based on change amount estimation information that is information on the robot 20000 or the object and that is information other than the image information; and an abnormality determination unit 1140 that performs abnormality determination by comparing the image feature quantity change amount with the estimated image feature quantity change amount.

[0510] As shown in Figs. 19A and 19B, the robot here may be a robot that includes a control device 600 and the robot main body 3000. With the configuration of Figs. 19A and 19B, the control device 600 includes the robot control unit 1110 and the other units of Fig. 38. In this way, operation based on control using image information can be performed, and a robot that automatically detects abnormalities in that control can be realized.

[0511] The configuration of the robot of this embodiment is not limited to Figs. 19A and 19B. For example, as shown in Fig. 39, the robot may include the robot main body 3000 and a base unit 350. The robot of this embodiment may be a dual-arm robot as shown in Fig. 39, which includes a first arm 2210-1 and a second arm 2210-2 in addition to parts corresponding to a head and a torso. In Fig. 39, the first arm 2210-1 is composed of joints 2211 and 2213 and frames 2215 and 2217 provided between the joints, and the second arm 2210-2 is configured similarly, but the configuration is not limited to this. Although Fig. 39 shows an example of a dual-arm robot with two arms, the robot of this embodiment may have three or more arms.

[0512] The base unit 350 is provided at the lower part of the robot main body 3000 and supports the robot main body 3000. In the example of Fig. 39, the base unit 350 is provided with wheels or the like so that the robot as a whole can move. However, the base unit 350 may instead have no wheels and be fixed to the floor or the like. In Fig. 39, a device corresponding to the control device 600 of Figs. 19A and 19B is not illustrated, but in the robot system of Fig. 39 the control device 600 is housed in the base unit 350, so that the robot main body 3000 and the control device 600 are configured as a single body.

[0513] Alternatively, instead of providing a dedicated control device such as the control device 600, the robot control unit 1110 and the other units described above may be realized by a board built into the robot (more specifically, an IC or the like mounted on the board).

[0514] As shown in Fig. 20, the functions of the robot control device 1000 may also be realized by a server 500 that is communicatively connected to the robot via a network 400 including at least one of wired and wireless connections.

[0515] Alternatively, in this embodiment, the server 500 acting as a robot control device may perform part of the processing of the robot control device of the present invention. In that case, the processing is realized by distributed processing together with a robot control device provided on the robot side.

[0516] In this case, the server 500 acting as a robot control device performs, among the processes of the robot control device of the present invention, the processes allocated to the server 500. On the other hand, the robot control device provided on the robot performs, among the processes of the robot control device of the present invention, the processes allocated to the robot-side robot control device.

[0517] For example, suppose the robot control device of the present invention performs first to M-th processes (M being an integer), and each of the first to M-th processes is divided into a plurality of sub-processes, so that the first process is realized by sub-process 1a and sub-process 1b, the second process by sub-process 2a and sub-process 2b, and so on. In this case, distributed processing is conceivable in which the server 500 acting as a robot control device performs sub-process 1a, sub-process 2a, ..., sub-process Ma, and the robot control device provided on the robot side performs sub-process 1b, sub-process 2b, ..., sub-process Mb. The robot control device of this embodiment, that is, the robot control device that executes the first to M-th processes, may then be the robot control device that executes sub-process 1a to sub-process Ma, the robot control device that executes sub-process 1b to sub-process Mb, or a robot control device that executes all of sub-process 1a to sub-process Ma and sub-process 1b to sub-process Mb. In other words, the robot control device of this embodiment is a robot control device that executes at least one sub-process of each of the first to M-th processes.

[0518] In this way, the server 500, which has a higher processing capacity than the terminal device on the robot side (for example, the control device 600 of Figs. 19A and 19B), can take on processing with a high load. Furthermore, the server 500 can collectively control the operation of each robot, which makes it easy, for example, to have a plurality of robots operate in coordination.

[0519] In recent years, there has been an increasing trend toward manufacturing many kinds of parts in small quantities, and when the kind of part being manufactured changes, the operation performed by the robot must also change. With the configuration shown in Fig. 20, the server 500 can collectively change the operations performed by the robots without re-teaching each of the plurality of robots. In addition, compared with providing one robot control device 1000 for each robot, the trouble involved in updating the software of the robot control device 1000 can be greatly reduced.

[0520] 3. Visual Servo Control

[0521] Before describing the abnormality detection method of this embodiment, general visual servo control is described. Fig. 40 shows a configuration example of a general visual servo control system. As can be seen from Fig. 40, compared with the robot control device 1000 of this embodiment shown in Fig. 36, it is a configuration from which the change amount calculation unit 1120, the change amount estimation unit 1130, and the abnormality determination unit 1140 have been removed.

[0522] When the number of dimensions of the image feature quantity used for visual servoing is n (n being an integer), the image feature quantity f is expressed as an image feature vector f = [f1, f2, ..., fn]^T. Each element of f may be, for example, an image coordinate value of a feature point (control point). In this case, the target image feature quantity fg input from the target feature quantity input unit 111 is likewise expressed as fg = [fg1, fg2, ..., fgn]^T.

[0523] The joint angles are likewise expressed as a joint angle vector whose dimension corresponds to the number of joints of the robot 20000 (in a narrow sense, of the arm 2210). For example, if the arm 2210 is a 6-degree-of-freedom arm with six joints, the joint angle vector θ is expressed as θ = [θ1, θ2, ..., θ6]^T.

[0524] In visual servoing, when the current image feature quantity f is acquired, the difference between this image feature quantity f and the target image feature quantity fg is fed back to the operation of the robot. Specifically, the robot is moved in the direction that reduces the difference between f and fg. To do this, it is necessary to know how the image feature quantity f changes when the joint angles θ are moved. In general this relationship is nonlinear: for example, for an element f1 = g(θ1, θ2, θ3, θ4, θ5, θ6), the function g is a nonlinear function.

[0525] Therefore, in visual servoing, methods using a Jacobian matrix J are widely known. Even if two spaces are in a nonlinear relationship, small changes in the respective spaces can be related to each other linearly, and the Jacobian matrix J is the matrix that relates these small changes to each other.

[0526] Specifically, when the position and posture X of the hand tip of the robot 20000 is X = [x, y, z, R1, R2, R3]^T, the Jacobian matrix Ja between the joint angle change amount and the position/posture change amount is expressed by the following equation (1), and the Jacobian matrix Ji between the position/posture change amount and the image feature quantity change amount is expressed by the following equation (2).

[0527] Equation 1

[0528] Ja = ∂X/∂θ, that is, the 6×6 matrix whose (i, j) element is the partial derivative of the i-th element of X with respect to θj .....(1)

[0529] Equation 2

[0530] Ji = ∂f/∂X, that is, the n×6 matrix whose (i, j) element is the partial derivative of fi with respect to the j-th element of X .....(2)

[0531] By using Ja and Ji, the relationships among Δθ, ΔX, and Δf can be expressed as in equations (3) and (4) below. Ja is generally called the robot Jacobian matrix, and if the mechanism information of the robot 20000, such as link lengths and rotation axes, is available, Ja can be computed analytically. On the other hand, Ji can be estimated in advance from, for example, the change in the image feature quantity when the position and posture of the hand tip of the robot 20000 is changed slightly, and methods for estimating Ji at any time during operation have also been proposed.

[0532] ΔX = Ja Δθ .....(3)

[0533] Δf = Ji ΔX .....(4)

[0534] By using equations (3) and (4) above, the relationship between the image feature quantity change amount Δf and the joint angle change amount Δθ can be expressed as in equation (5) below.

[0535] Δf = Jv Δθ .....(5)

[0536] Here, Jv = Ji Ja is the Jacobian matrix between the joint angle change amount and the image feature quantity change amount; Jv is also referred to as the image Jacobian matrix. The relationships of equations (3) to (5) above are illustrated in Fig. 41.

[0537] Based on the above, the target trajectory generation unit 112 may treat the difference between f and fg as Δf and obtain the drive amount of the joint angles (the joint angle change amount) Δθ. In this way, the joint angle change amount that brings the image feature quantity f closer to fg can be obtained. Specifically, to obtain Δθ from Δf, both sides of equation (5) above are multiplied from the left by the inverse matrix Jv^-1 of Jv; further introducing a control gain λ, the target joint angle change amount Δθg is obtained by the following equation (6).

[0538] Δθg = -λ Jv^-1 (f - fg) .....(6)

[0539] Although equation (6) above uses the inverse matrix Jv^-1 of Jv, when Jv^-1 cannot be obtained, the generalized inverse (pseudo-inverse) matrix Jv# of Jv may be used instead. A minimal sketch of this control law is given below.
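The following sketch (variable and function names are illustrative, not from the patent) computes the target joint angle change of equation (6) using the pseudo-inverse, so that a non-square or singular image Jacobian is also handled:

```python
import numpy as np

def target_joint_change(f, fg, Jv, gain=0.5):
    """Equation (6): dtheta_g = -gain * Jv# (f - fg), with Jv# the
    Moore-Penrose pseudo-inverse of the image Jacobian Jv."""
    Jv_pinv = np.linalg.pinv(Jv)          # generalized inverse Jv#
    return -gain * Jv_pinv @ (f - fg)

# Illustrative dimensions: 6-dimensional feature vector, 6 joints.
f = np.array([123.5, 78.0, 204.0, 96.5, 152.0, 158.5])
fg = np.array([130.0, 75.0, 210.0, 95.0, 155.0, 155.0])
Jv = np.eye(6)                            # placeholder image Jacobian
dtheta_g = target_joint_change(f, fg, Jv)
```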

[0540] By using equation (6) above, a new Δθg is obtained each time a new image is acquired. This makes it possible to control the robot toward the target state (the state in which the image feature quantity becomes fg) while updating the target joint angles with the acquired images. This flow is illustrated in Fig. 42. When the image feature quantity fm-1 is obtained from the (m-1)-th image (m being an integer), Δθg,m-1 can be obtained by setting f = fm-1 in equation (6). Then, between the (m-1)-th image and the next image, that is, the m-th image, the robot 20000 is controlled with the obtained Δθg,m-1 as the target. When the m-th image is acquired, the image feature quantity fm is obtained from it, and a new target Δθg,m is calculated using equation (6). Between the m-th image and the (m+1)-th image, the calculated Δθg,m is used for the control. This processing is continued until it ends (until the image feature quantity is sufficiently close to fg).

[0541] Although the target joint angle change amount is obtained in this way, the joint angles do not necessarily have to change by the full target amount. For example, between the m-th image and the (m+1)-th image, control is performed with Δθg,m as the target value, but in many cases the next image, that is, the (m+1)-th image, is acquired and a new target value Δθg,m+1 is calculated from it before the actual change amount has reached Δθg,m.

[0542] 4. Abnormality Detection Method

[0543] The abnormality detection method of this embodiment is now described. As shown in Fig. 43A, when the joint angles of the robot 20000 are θp, the p-th image information is acquired and the image feature quantity fp is calculated from it. Then, at a time later than the acquisition time of the p-th image information, when the joint angles of the robot 20000 are θq, the q-th image information is acquired and the image feature quantity fq is calculated from it. Here, the p-th image information and the q-th image information may be adjacent in the time series, or may be non-adjacent (other image information is acquired after the p-th image information and before the q-th image information).

[0544] In visual servoing, as described above, the differences of fp and fq from fg are used as Δf for calculating Δθg, but the difference fq - fp between fq and fp is nothing other than an image feature quantity change amount. Moreover, since the joint angles θp and θq are acquired by the joint angle detection unit 115 from the encoders or the like, they can be obtained as measured values, and the difference θq - θp is the joint angle change amount Δθ. That is, for the two pieces of image information, the corresponding image feature quantities f and joint angles θ are each obtained, the image feature quantity change amount is obtained as Δf = fq - fp, and the corresponding joint angle change amount is obtained as Δθ = θq - θp.

[0545] Furthermore, as shown in equation (5) above, the relationship Δf = Jv Δθ holds. That is, if Δfe = Jv Δθ is computed using the measured Δθ = θq - θp and the Jacobian matrix Jv, then in an ideal environment in which no errors arise at all, the obtained Δfe should agree with the measured Δf = fq - fp.

[0546] Accordingly, the change amount estimation unit 1130 calculates the estimated image feature quantity change amount Δfe by applying to the joint angle change amount the Jacobian matrix Jv that relates the joint angle information to the image feature quantity (specifically, that relates the joint angle change amount to the image feature quantity change amount). As described above, in an ideal environment the obtained estimated image feature quantity change amount Δfe should agree with the image feature quantity change amount Δf obtained as Δf = fq - fp in the change amount calculation unit 1120; conversely, when Δf and Δfe differ greatly, it can be determined that some abnormality has occurred. A minimal sketch of this comparison is given below.
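As a minimal sketch of this comparison (the numerical values and the placeholder Jacobian are assumptions for illustration, not values from the patent):

```python
import numpy as np

# Measured quantities for the p-th and q-th images (illustrative values).
f_p = np.zeros(6)
f_q = np.array([3.5, -2.0, 4.0, 1.5, 2.0, -1.5])
theta_p = np.deg2rad([10.0, 20.0, 30.0, 0.0, 45.0, 0.0])
theta_q = np.deg2rad([11.0, 20.5, 29.0, 0.0, 46.0, 0.5])
Jv = np.eye(6) * 50.0          # placeholder image Jacobian (pixels per radian)

delta_f = f_q - f_p                        # measured image feature quantity change
delta_theta = theta_q - theta_p            # measured joint angle change amount
delta_fe = Jv @ delta_theta                # estimated image feature change (eq. (5))
difference = np.linalg.norm(delta_f - delta_fe)   # fed to the abnormality check
```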

[0547] Here, the factors that can make Δf and Δfe differ include errors in calculating the image feature quantity from the image information, errors when the encoders read the joint angle values, and errors contained in the Jacobian matrix Jv. However, compared with the other two, the possibility of an error when the encoders read the joint angle values is low, and the error contained in the Jacobian matrix Jv is also not very large. In contrast, since the image also captures many objects that are not the recognition target, errors in calculating the image feature quantity from the image information occur relatively frequently. Moreover, when an abnormality occurs in the image feature quantity calculation, the error can become very large. For example, if the processing for recognizing the desired object in the image fails, an object may be erroneously recognized at a position in the image that is very different from the original object position. Therefore, this embodiment mainly detects abnormalities in the calculation of the image feature quantity, although errors caused by other factors may also be detected as abnormalities.

[0548] The abnormality determination may be, for example, a determination process using a threshold value. Specifically, the abnormality determination unit 1140 compares difference information between the image feature quantity change amount Δf and the estimated image feature quantity change amount Δfe with a threshold value, and determines that there is an abnormality when the difference information is larger than the threshold value. For example, a given threshold value Th is set, and an abnormality is determined to have occurred when the following expression (7) is satisfied. In this way, an abnormality can be detected by a simple computation such as expression (7).

[0549] |Δf - Δfe| > Th .....(7)

[0550] The threshold value Th need not be a fixed value and may be varied according to the situation. For example, the abnormality determination unit 1140 may be configured to set the threshold value larger as the difference between the acquisition times of the two pieces of image information used in the image feature quantity change amount calculation in the change amount calculation unit 1120 becomes larger.

[0551] As shown in Fig. 41 and elsewhere, the Jacobian matrix Jv is the matrix that relates Δθ to Δf. As shown in Fig. 44, even when the same Jacobian matrix Jv is applied, the Δfe' obtained by applying it to a Δθ' larger than Δθ changes more than the Δfe obtained by applying it to Δθ. Since the Jacobian matrix Jv can hardly be assumed to contain no error at all, Δfe and Δfe' deviate, as shown in Fig. 44, from the ideal image feature quantity changes Δfi and Δfi' corresponding to the joint angle changes Δθ and Δθ'. Moreover, as the comparison of A1 and A2 in Fig. 44 shows, the larger the change amount, the larger this deviation.

[0552] If it is assumed that no error at all arises in the image feature quantity calculation, the image feature quantity change amount Δf obtained from the image information equals the ideal change amount. In that case, the left-hand side of expression (7) above represents the error caused by the Jacobian matrix, and takes a value comparable to A1 when the change amount is small, as with Δθ and Δfe, and a value comparable to A2 when the change amount is large, as with Δθ' and Δfe'. However, as described above, the same Jacobian matrix Jv is used for both Δfe and Δfe', so even though the value of the left-hand side of expression (7) becomes larger, it is not appropriate to determine that the A2 case is in a more abnormal state than the A1 case. That is, it would not be appropriate for expression (7) to be unsatisfied (no abnormality determined) in the situation corresponding to A1 while being satisfied (abnormality determined) in the situation corresponding to A2. Therefore, the abnormality determination unit 1140 sets the threshold value Th larger as the change amounts such as Δθ and Δfe become larger. Since the threshold value Th in the situation corresponding to A2 is then larger than in the situation corresponding to A1, an appropriate abnormality determination can be made. Because Δθ, Δfe, and the like become larger as the difference between the acquisition times of the two pieces of image information (in Fig. 43A, the p-th and q-th image information) becomes larger, in practice the threshold value Th may be set, for example, in accordance with the difference between the image acquisition times. A sketch of such a threshold check follows.
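The following sketch implements the check of expression (7) with a threshold that grows with the acquisition-time gap; the base value and the scaling constant are assumptions for illustration, not values from the patent:

```python
import numpy as np

def is_abnormal(delta_f, delta_fe, dt, base_threshold=5.0, per_second=2.0):
    """Return True if |delta_f - delta_fe| exceeds a threshold Th that is
    enlarged in proportion to the acquisition-time difference dt (in seconds)
    between the two images used for the change amount (expression (7))."""
    th = base_threshold + per_second * dt   # larger gap -> larger threshold
    return np.linalg.norm(delta_f - delta_fe) > th
```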

[0553] 另外,考虑异常判定部1140中检测到异常的情况下的各种控制。 [0553] Further, considering a variety of abnormality determination unit 1140 under the control of an abnormality is detected. 例如,在由异常判定部1140检测到异常的情况下,机器人控制部1110也可W进行使机器人20000停止的控制。 For example, in a case where the abnormality determination section 1140 detects the abnormality, the control unit 1110 of the robot W, the robot may stop control of 20,000. 如上所述,检测到异常的情况例如是来自图像信息的图像特征量的运算产生较大误差的情况等。 As described above, when detecting an abnormal image such as, for example, a case where the feature quantity calculation from the image information generated in large errors. 即,若使用该图像特征量(若为图43A的例子则是fq)来进行机器人20000的控制,则存在向和图像特征量与目标图像特征量fg接近的方向相距甚远的方向使机器人20000移动的可能性。 That is, if the image characteristic amount (if the example of FIG. 43A is FQ) to control the robot 20000, the image feature quantity distance to the target image feature amount fg direction approaching the far direction there robot 20,000 the possibility of movement. 因该情况,恐怕会使臂2210等与其他物体相碰撞,并由于采取不合理的姿势而使手部等所把持的对象物落下。 Because of the situation I would run the arm 2210 and the like collide with other objects, and to take an unreasonable posture since the hand gripped like object dropped. 由此,作为异常时的控制的一个例子,考虑使机器人20000的动作本身停止,而不进行那样风险较大的动作。 Thus, as an example of the control when an abnormality, considering that the stopping operation of the robot itself 20,000, without risk that a large operation.

[0554] 另外,若推断出图像特征量fq产生较大误差,而不希望进行使用fq的控制,则也可W不立即使机器人动作停止,并且不将fq用于控制。 [0554] Further, when the image feature amount fq inferred to considerable error, not desirable to use a control fq, then W can not make the operation of the robot is stopped immediately, and is not used to control the fq. 由此例如,在由异常判定部1140检测到异常的情况下,机器人控制部1110也可W跳过基于变化量运算部1120中的图像特征量变化量的运算所使用的两个图像信息中的、在时间序列上靠后的时刻获取的图像信息亦即异常判定图像信息所实现的控制,而进行基于在比异常判定图像信息靠前的时刻获取的图像信息所实现的控制。 Thus, for example, in a case where the abnormality determination section 1140 detects an abnormality, the robot control unit 1110 may change the image information skipping two W image feature amount variation amount calculation unit 1120 used in the calculation-based , by the time the image information acquired in a time series abnormality determination control i.e. implemented image information, and image information is controlled based on the front than the abnormality determination time acquisition of the image information is achieved.

[0555] 若为图43A的例子,则异常判定图像信息是第q图像。 [0555] FIG 43A is an example of when, the abnormality is determined that the image information of an image q. 另外,在图42的例子中,使用邻接的两个图像信息来进行异常判定,并且判定为在第m-2图像信息与第m-1图像信息中无异常,在第m-1图像信息与第m图像信息中无异常,在第m图像信息与第m+1图像信息中有异常。 Further, in the example of FIG. 42, the use of two adjacent image information abnormality determination, and determines that no abnormality in the image of the m-2 m-1 information and the second image information, the first image information and the m-1 m-no abnormal image information, image information in the m-m + 1-abnormal image information. 在该情况下,考虑可知fm-lW及fm不存在异常,而fm+l存在异常,从而Δ 0gm-l、Δ 0gm能够用于控制,但是将A 0gm+i用于控制是不适当的。 In this case, considering understood fm fm-lW and there is no abnormality, the abnormality fm + l, so that Δ 0gm-l, Δ 0gm be used to control, but the A 0gm + i for controlling inappropriate. 本来,在第m+1图像信息与下一个第m+2图像信息之间将A 0gm+l用于控制,但是运里,由于该控制不适当所W不进行。 Originally, in the m + 1 of the next image information m + 2 between the image information for controlling the A 0gm + l, but in operation, since the control is inappropriate W is not performed. 在该情况下,在第m+1图像信息与第m+2图像信息之间,也使用之前求出的A0gm而使机器人20000进行动作即可。 In this case, m + 1 prior to the image information between the m + 2 and the image information obtained A0gm be used to operate the robot to 20,000. 由于A 0gm至少是在fm的计算时刻向目标方向使机器人20000移动的信息,所W即使在fm+l的计算后继续利用,也难W认为会产生较大的误差。 Since the information is to enable the robot A 0gm 20,000 moving direction of the target at least fm of calculation time, the continued use even after W fm + l calculated, it is difficult that W will cause large errors. 运样,即使在检测到异常的情况下, 也能够用在此w前的信息、特别是在比异常检测时刻靠前的时刻获取并且未检测到异常的信息,进行大体控制,从而使机器人20000的动作继续进行。 Sample transport, even in a case where an abnormality is detected, it is possible to use this information before w, particularly in the abnormality detection time than the forward time and the information acquired abnormality is not detected, control is generally performed such that the robot 20,000 the action continues. 之后,若获取新的图像信息(若为图42的例子则是第m+2图像信息),则利用从该新的图像信息求出的新的图像特征量来进行控制即可。 Thereafter, if new image information is acquired (if the example of FIG. 42 is the image information of the m + 2), the control may be performed using the new image of the new image feature amount information obtained.

[0556] The flowchart of FIG. 48 shows the processing flow of this embodiment including the abnormality detection considered above. When the processing starts, first, image acquisition by the image information acquisition unit 116 and computation of the image feature quantity by the image feature quantity computation unit 117 are performed, and the image feature quantity change amount is computed in the change amount computation unit 1120 (S10001). In addition, joint angle detection is performed by the joint angle detection unit 115, and the estimated image feature quantity change amount is obtained in the change amount estimation unit 1130 (S10002). Then, abnormality determination is performed according to whether the difference between the image feature quantity change amount and the estimated image feature quantity change amount is at or below a given threshold (S10003).

[0557] When the difference is at or below the threshold (Yes in S10003), no abnormality has occurred, so control is performed using the image feature quantity obtained in S10001 (S10004). Then, it is determined whether the current image feature quantity is sufficiently close to (in a narrow sense, coincides with) the target image feature quantity (S10005); if Yes, the target has been reached normally and the processing ends. On the other hand, in the case of No in S10005, the operation itself has no abnormality but the target has not been reached, so the flow returns to S10001 and control continues.

[0558] On the other hand, when the difference between the image feature quantity change amount and the estimated image feature quantity change amount is larger than the threshold (No in S10003), it is determined that an abnormality has occurred. Then, it is determined whether the abnormality has occurred N consecutive times (S10006); if it has occurred consecutively, the abnormality is of a degree for which continuing the operation is not preferable, and the operation is stopped. Otherwise, control is performed using a past image feature quantity that was determined to be free of abnormality (S10007), and the flow returns to S10001 to continue the image processing at the next time. In the flowchart of FIG. 48, as described above, until the abnormality reaches a certain level (here, up to N-1 consecutive occurrences), the operation is not stopped immediately; control is performed in a direction that allows the operation to continue.
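As an illustration of the flow of FIG. 48, the following Python sketch strings steps S10001 to S10007 together into one loop. It is only a minimal sketch under assumed interfaces: the functions get_image_features and get_joint_angles, the robot object, and the pseudoinverse-based servo law are hypothetical and are not part of the embodiment itself.

```python
import numpy as np

def visual_servo_loop(robot, J_v, gain, f_goal, threshold, N,
                      get_image_features, get_joint_angles):
    """Sketch of the loop in FIG. 48 (S10001-S10007).

    J_v      : Jacobian relating joint angle changes to image feature changes
    gain     : servo gain used to turn a feature error into a joint command
    f_goal   : target image feature vector
    threshold: abnormality threshold for |delta_f - delta_fe|
    N        : number of consecutive abnormalities that stops the robot
    """
    f_prev, theta_prev = get_image_features(), get_joint_angles()
    f_safe = f_prev                      # last feature vector judged normal
    abnormal_count = 0

    while True:
        # S10001: new image feature and measured feature change
        f_curr = get_image_features()
        delta_f = f_curr - f_prev

        # S10002: estimated feature change from the joint angle change
        theta_curr = get_joint_angles()
        delta_fe = J_v @ (theta_curr - theta_prev)

        # S10003: abnormality determination
        if np.linalg.norm(delta_f - delta_fe) <= threshold:
            abnormal_count = 0
            f_safe = f_curr
            # S10004: control with the current feature
            robot.move_joints(gain * np.linalg.pinv(J_v) @ (f_goal - f_curr))
            # S10005: finished when close enough to the target features
            if np.linalg.norm(f_goal - f_curr) < 1e-3:
                return True
        else:
            abnormal_count += 1
            if abnormal_count >= N:      # S10006: stop after N failures
                robot.stop()
                return False
            # S10007: keep moving with the last feature judged normal
            robot.move_joints(gain * np.linalg.pinv(J_v) @ (f_goal - f_safe))

        f_prev, theta_prev = f_curr, theta_curr
```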

[0559] In the description above, no particular attention was paid to the time differences among the acquisition time of the image information, the acquisition time of the joint angle information, and the acquisition time of the image feature quantity (the time its computation ends). In practice, however, as shown in FIG. 45, even if image information is acquired at a given time, a time lag occurs before the encoder reads the joint angle corresponding to the time that image information was acquired and transmits the read information to the joint angle detection unit 115. In addition, since the image feature quantity is computed after the image is acquired, a time lag also occurs there, and because the computational load of the image feature quantity computation differs depending on the image information, the length of that time lag also differs. For example, when no object other than the recognition target is captured and the background is a single plain color, the image feature quantity can be computed at high speed, but when various objects are captured, the image feature quantity computation takes time.

[0560] That is, in FIG. 43A, the abnormality determination using the p-th image information and the q-th image information was described in a simplified manner, but in practice, as shown in FIG. 45, it is necessary to consider the time lag tθp from the acquisition of the p-th image information to the acquisition of the corresponding joint angle information, and the time lag tfp from the acquisition of the p-th image information to the end of the image feature quantity computation; likewise, tθq and tfq must be considered for the q-th image information.

[0561] The abnormality determination process starts, for example, at the time when the image feature quantity fq of the q-th image information is acquired, but it must then be determined appropriately how long ago the image feature quantity fp, which is the counterpart for taking the difference, was acquired, and when the corresponding joint angle information θp was acquired.

[0562] Specifically, when the image feature quantity f1 of the first image information is acquired at an i-th time (i is a natural number) and the image feature quantity f2 of the second image information is acquired at a j-th time (j is a natural number satisfying j ≠ i), the change amount computation unit 1120 obtains the difference between the image feature quantity f1 and the image feature quantity f2 as the image feature quantity change amount; and when the change amount estimation unit 1130 acquires change amount estimation information p1 corresponding to the first image information at a k-th time (k is a natural number) and acquires change amount estimation information p2 corresponding to the second image information at an l-th time (l is a natural number), it obtains the estimated image feature quantity change amount from the change amount estimation information p1 and the change amount estimation information p2.

[0563] In the example of FIG. 45, the image feature quantities and the joint angle information are acquired at various times. Taking the acquisition time of fq (for example, the j-th time) as the reference, the image feature quantity fp corresponding to the p-th image information is an image feature quantity acquired earlier by (tfq + ti - tfp); that is, the i-th time is determined to be earlier than the j-th time by (tfq + ti - tfp). Here, ti denotes the difference between the image acquisition times, as shown in FIG. 45.

[0564] Similarly, the l-th time, which is the acquisition time of θq, is determined to be earlier than the j-th time by (tfq - tθq), and the k-th time, which is the acquisition time of θp, is determined to be earlier than the j-th time by (tfq + ti - tθp). In the method of this embodiment, Δf and Δθ must correspond to each other; specifically, if Δf is obtained from the p-th image information and the q-th image information, then Δθ must also correspond to the p-th image information and the q-th image information. Otherwise, the estimated image feature quantity change amount Δfe obtained from Δθ simply has no correspondence with Δf, and the comparison process becomes meaningless. As described above, establishing the correspondence between the times is therefore important. In FIG. 45, since the joints are driven at very high speed and high frequency, the joint angle drive is treated as a continuous process.

[0565] In an ordinary robot 20000 and robot control device 1000, the difference between the acquisition time of the image information and the acquisition time of the corresponding joint angle information can be considered sufficiently small. Therefore, the k-th time may be regarded as the acquisition time of the first image information, and the l-th time as the acquisition time of the second image information. In this case, tθp and tθq in FIG. 45 can be set to 0, which simplifies the processing.

[0566] As a more specific example, consider a method in which the next image information is acquired at the time when the image feature quantity of the preceding image information has been computed. FIG. 46 shows an example of this case. The vertical axis of FIG. 46 is the value of the image feature quantity, and the "actual feature quantity" is the value that would be obtained if an image feature quantity corresponding to the joint angle information at that time could be acquired; it cannot be confirmed in the processing. The smooth transition of the actual feature quantity reflects the fact that the joint angle drive can be regarded as continuous.

[0567] In this case, since the image feature quantity corresponding to the image information acquired at time B1 is obtained at time B2, after t2 has elapsed, the actual feature quantity at B1 corresponds to the image feature quantity at B2 (they coincide if there is no error). The next image information is then acquired at time B2.

[0568] Similarly, for the image feature quantity of the image information acquired at B2, the computation ends at B3, and the next image information is acquired at B3. In the same way, for the image feature quantity of the image information acquired at B4, the computation ends at B5, after t1 has elapsed, and the next image information is acquired at B5.

[0569] In the example of FIG. 46, when the image feature quantity computed at time B5 and the image feature quantity computed at time B2 are used for the abnormality determination process, the acquisition times of the corresponding image information are B4 and B1, respectively. As described above, when the difference between the acquisition time of the image information and the acquisition time of the corresponding joint angle information is sufficiently small, the joint angle information at time B4 and at time B1 may be used. That is, as shown in FIG. 46, when the difference between B2 and B5 is Ts and the reference time is B5, the feature quantity from Ts earlier is used as the image feature quantity to be compared. The two pieces of joint angle information used for obtaining the difference of the joint angle information may be the information from t1 earlier and the information from (Ts + t2) earlier. The difference between the acquisition times of the two pieces of image information is (Ts + t2 - t1). Accordingly, when the threshold is determined from the difference between the image acquisition times, the value (Ts + t2 - t1) may be used.
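The offsets described in this paragraph reduce to simple arithmetic. The sketch below, with hypothetical names, only restates which past samples are compared when B5 is taken as the reference time.

```python
def alignment_offsets(Ts, t1, t2):
    """Ts: interval between the two feature computation times (B5 - B2);
    t1: computation time of the newer feature (B5 - B4);
    t2: computation time of the older feature (B2 - B1).
    Returns, relative to the reference time B5, how far back to look for
    the compared feature, the two joint angle samples, and the image
    acquisition gap used to scale the abnormality threshold."""
    feature_offset = Ts             # compare with the feature computed at B2
    joint_offsets = (t1, Ts + t2)   # joint angles at B4 and B1
    image_gap = Ts + t2 - t1        # acquisition time difference of the images
    return feature_offset, joint_offsets, image_gap
```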

[0570] Various acquisition times can thus be considered for the various pieces of information, but as described above, all cases are the same in that the times are determined so that a correspondence exists between Δf and Δθ.

[0571] 5. Modifications

[0572] In the description above, Δf and Δθ are acquired, the estimated image feature quantity change amount Δfe is obtained from Δθ, and Δf is compared with Δfe. However, the method of this embodiment is not limited to this. For example, as in the measurement method described above, position and posture information of the hand tip of the robot 20000, or of the object gripped by the hand tip, may be acquired by some means.

[0573] In this case, position and posture information X is acquired as the change amount estimation information, so its change amount ΔX can be obtained. Then, as shown in the above expression (4), by applying the Jacobian matrix Ji to ΔX, the estimated image feature quantity change amount Δfe can be obtained in the same manner as in the case of Δθ. Once Δfe is obtained, the subsequent processing is the same as in the example above. That is, the change amount estimation unit 1130 computes the estimated image feature quantity change amount by applying, to the change amount of the position and posture information, a Jacobian matrix that relates the position and posture information to the image feature quantity (specifically, that relates the change amount of the position and posture information to the image feature quantity change amount). FIG. 43B shows the flow of this processing, corresponding to FIG. 43A.
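As a sketch of this variant, the estimated image feature quantity change amount can be formed from a measured pose change instead of a joint angle change. The function below assumes that the Jacobian Ji, the threshold, and the change amounts are already available; all names are hypothetical.

```python
import numpy as np

def abnormal_from_pose(delta_f, delta_x, J_i, threshold):
    """Compare the measured feature change delta_f with the estimate
    delta_fe = J_i @ delta_x obtained from the pose change delta_x
    (expression (4)); report an abnormality when they disagree."""
    delta_fe = J_i @ delta_x
    return np.linalg.norm(delta_f - delta_fe) > threshold
```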

[0574] Here, when the position and posture of the hand tip of the robot 20000 (the hand or the end effector 2220) is used as the position and posture information, Ji is information that relates the change amount of the position and posture information of the hand tip to the image feature quantity change amount. When the position and posture of the object is used as the position and posture information, Ji is information that relates the change amount of the position and posture information of the object to the image feature quantity change amount. Alternatively, if the relative position and posture with which the end effector grips the object is known, the position and posture information of the end effector 2220 and that of the object correspond one to one, so one can be converted into the other. That is, various embodiments are conceivable, such as acquiring the position and posture information of the end effector 2220, converting it into the position and posture information of the object, and then obtaining Δfe using the Jacobian matrix Ji that relates the change amount of the position and posture information of the object to the image feature quantity change amount.

[0575] Furthermore, the comparison process for the abnormality determination of this embodiment is not limited to using the image feature quantity change amount Δf and the estimated image feature quantity change amount Δfe. The image feature quantity change amount Δf, the change amount ΔX of the position and posture information, and the change amount Δθ of the joint angle information can be converted into one another by using the Jacobian matrices and their inverse matrices (in a broad sense, generalized inverse matrices), that is, inverse Jacobian matrices.

[0576] That is, as shown in FIG. 49, the method of this embodiment can be applied to a robot control device configured to include: a robot control unit 1110 that controls the robot 20000 based on image information; a change amount computation unit 1120 that obtains a position and posture change amount representing the change amount of the position and posture information of the end effector 2220 of the robot 20000 or of the object, or a joint angle change amount representing the change amount of the joint angle information of the robot 20000; a change amount estimation unit 1130 that obtains an image feature quantity change amount from the image information and obtains, from the image feature quantity change amount, an estimated amount of the position and posture change amount, i.e., an estimated position and posture change amount, or an estimated amount of the joint angle change amount, i.e., an estimated joint angle change amount; and an abnormality determination unit 1140 that performs abnormality determination by comparing the position and posture change amount with the estimated position and posture change amount, or by comparing the joint angle change amount with the estimated joint angle change amount.

[0577] Compared with FIG. 36, FIG. 49 has a structure in which the roles of the change amount computation unit 1120 and the change amount estimation unit 1130 are interchanged. That is, the change amount computation unit 1120 obtains a change amount from the joint angle information (here, the joint angle change amount or the position and posture change amount), and the change amount estimation unit 1130 estimates a change amount from the difference of the image feature quantities (it obtains the estimated joint angle change amount or the estimated position and posture change amount). In FIG. 49, the change amount computation unit 1120 is shown as a unit that acquires the joint angle information, but as described above, the change amount computation unit 1120 may also acquire the position and posture information using measurement results or the like.

[0578] Specifically, when Δf and Δθ are acquired, the estimated joint angle change amount Δθe may be obtained by the following expression (8), derived from the above expression (5), and Δθ may be compared with Δθe. Specifically, using a given threshold Th2, an abnormality may be determined when the following expression (9) holds.

[0579] Δθe = Jv⁻¹Δf ..... (8)

[0580] |Δθ − Δθe| > Th2 ..... (9)

[0581] Alternatively, when Δf and ΔX are acquired using the measurement method described above, the estimated position and posture change amount ΔXe may be obtained by the following expression (10), derived from the above expression (4), and ΔX may be compared with ΔXe. Specifically, using a given threshold Th3, an abnormality may be determined when the following expression (11) holds.

[0582] ΔXe = Ji⁻¹Δf ..... (10)

[0583] |ΔX − ΔXe| > Th3 ..... (11)

[0584] Moreover, the comparison is not limited to using directly obtained information. For example, when Δf and Δθ are acquired, the estimated position and posture change amount ΔXe may be obtained from Δf using the above expression (10), the position and posture change amount ΔX may be obtained from Δθ using the above expression (3) (strictly speaking, this ΔX is not a measured value but an estimated value), and the determination using the above expression (11) may then be performed.

[0585] Alternatively, when Δf and ΔX are acquired, the estimated joint angle change amount Δθe may be obtained from Δf using the above expression (8), the joint angle change amount Δθ may be obtained from ΔX using the following expression (12), derived from the above expression (3) (strictly speaking, this Δθ is not a measured value but an estimated value), and the determination using the above expression (9) may then be performed.

[0586] Δθ = Ja⁻¹ΔX ..... (12)

[0587] That is, the change amount computation unit 1120 performs any one of the following processes: acquiring a plurality of pieces of position and posture information and obtaining their difference as the position and posture change amount; acquiring a plurality of pieces of position and posture information and obtaining the joint angle change amount from their difference; acquiring a plurality of pieces of joint angle information and obtaining their difference as the joint angle change amount; and acquiring a plurality of pieces of joint angle information and obtaining the position and posture change amount from their difference.

[0588] FIG. 47 summarizes the relationships among Δf, ΔX, and Δθ shown above, together with the expression numbers used in this specification. That is, in the method of this embodiment, if any two of Δf, ΔX, and Δθ are acquired, they can be converted into any one of Δf, ΔX, and Δθ and compared, so the method of this embodiment can be realized, and various modifications are possible as to which information is acquired and which information is used for the comparison.
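A sketch of the inverse-direction comparisons of expressions (8) to (11) is given below, using the pseudoinverse as the generalized inverse mentioned in paragraph [0575]. The function names and the use of NumPy are assumptions for illustration only.

```python
import numpy as np

def check_in_joint_space(delta_f, delta_theta, J_v, th2):
    """Expressions (8) and (9): map delta_f back to joint space and
    compare it with the measured joint angle change."""
    delta_theta_e = np.linalg.pinv(J_v) @ delta_f
    return np.linalg.norm(delta_theta - delta_theta_e) > th2

def check_in_pose_space(delta_f, delta_x, J_i, th3):
    """Expressions (10) and (11): map delta_f back to pose space and
    compare it with the measured (or estimated) pose change."""
    delta_x_e = np.linalg.pinv(J_i) @ delta_f
    return np.linalg.norm(delta_x - delta_x_e) > th3
```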

[0589] The robot control device 1000 and the like of this embodiment may also realize part or most of their processing by a program. In that case, a processor such as a CPU executes the program, whereby the robot control device 1000 and the like of this embodiment are realized. Specifically, a program stored in a non-transitory information storage medium is read out, and a processor such as a CPU executes the read program. Here, the information storage medium (a computer-readable medium) stores programs, data, and the like, and its function can be realized by an optical disc (DVD, CD, etc.), an HDD (hard disk drive), a memory (card memory, ROM, etc.), or the like. A processor such as a CPU performs the various processes of this embodiment based on the program (data) stored in the information storage medium. That is, the information storage medium stores a program for causing a computer (a device including an operation unit, a processing unit, a storage unit, and an output unit) to function as the units of this embodiment (a program for causing the computer to execute the processing of the units).

[0590] Although this embodiment has been described above in detail, those skilled in the art will readily understand that many modifications are possible without substantially departing from the novel matters and effects of the present invention. All such modifications are therefore included within the scope of the present invention. For example, a term described at least once in the specification or the drawings together with a different term having a broader or the same meaning can be replaced by that different term anywhere in the specification or the drawings. The configuration and operation of the robot control device 1000 and the like are also not limited to those described in this embodiment, and various modifications are possible.

[0591] Seventh Embodiment

[0592] 1. Method of This Embodiment

[0593] First, the method of this embodiment will be described. Inspection of an inspection target object (in particular, visual inspection) is used in many situations. For visual inspection, the basic method is for a person to look at and observe the object, but from the viewpoints of saving labor for the user performing the inspection and improving the accuracy of the inspection, methods for automating the inspection using an inspection device have been proposed.

[0594] The inspection device here may be a dedicated device; for example, as a dedicated inspection device, a device including an imaging unit CA, a processing unit PR, and an interface unit IF, as shown in FIG. 54, is conceivable. In this case, the inspection device acquires a captured image of the inspection target object OB captured using the imaging unit CA, and performs the inspection process using the captured image in the processing unit PR. Various contents are conceivable for the inspection process; for example, an image of the inspection target object OB in a state determined to be acceptable (which may be a captured image or may be created from model data) may be acquired in advance as a pass image, and this pass image may be compared with the actually captured image. If the captured image is close to the pass image, the inspection target object OB captured in the image can be determined to be acceptable; if the difference between the captured image and the pass image is large, it can be determined that the inspection target object OB has some problem and is rejected. Patent Document 1 discloses a method of using a robot as an inspection device.

[0595] However, as is also clear from the above example of the pass image, in order to perform an inspection using an inspection device, information for that inspection must be set in advance. For example, although it depends on how the inspection target object OB is arranged, information such as from which direction the inspection target object OB is to be observed must be set in advance.

[0596] In general, how the inspection target object OB is observed (in a narrow sense, with what shape and size it appears in the captured image) varies with the relative relationship between the inspection target object OB and the position and direction of observation. Hereinafter, the position from which the inspection target object OB is observed is expressed as the viewpoint position; in a narrow sense, the viewpoint position means the position at which the imaging unit is arranged. The direction in which the inspection target object OB is observed is expressed as the line-of-sight direction; in a narrow sense, the line-of-sight direction means the imaging direction of the imaging unit (the direction of the optical axis). If no reference for the viewpoint position and the line-of-sight direction is set, the way the inspection target object OB is observed may change every time an inspection is performed, so it is simply impossible to perform a visual inspection that determines whether the inspection target object OB is normal or abnormal in accordance with the way it is observed.

[0597] In addition, as the pass image serving as the reference for determining that the inspection target object OB has no abnormality, it cannot be decided from which viewpoint position and line-of-sight direction an image should be kept. That is, if the position and direction of observation at the time of inspection are undetermined, the comparison target (inspection reference) for the captured image acquired at the time of inspection is also undetermined, and an appropriate inspection cannot be performed. It is true that if images of the inspection target object OB determined to be acceptable are kept as observed from all viewpoint positions and line-of-sight directions, the situation of having no pass image can be avoided. However, the number of viewpoint positions and line-of-sight directions in that case becomes quite large, and so does the number of pass images, which is not realistic. For these reasons as well, the pass images must be kept in advance.

[0598] Furthermore, in general, the pass image and the captured image also contain information that is unnecessary for the inspection, so if the entire image is used for the inspection process (comparison process), the accuracy of the inspection may be lowered. For example, tools, jigs, and the like may appear in the captured image in addition to the inspection target object, and it is not preferable to use such information for the inspection. Also, when only a part of the inspection target object is the target of inspection, information of regions of the inspection target object outside the inspection target may lower the inspection accuracy. Specifically, as described later with reference to FIGS. 64A to 64D, when considering an operation of assembling an object B, which is smaller than a large object A, onto the object A, the target of inspection should be the surroundings of the assembled object B; there is little need to inspect the whole of the object A, and making the whole of A the inspection target also increases the possibility of erroneous determination. Therefore, in view of improving the accuracy of the inspection process, the inspection region is also important information in the inspection.
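A minimal sketch of the comparison process described in paragraphs [0594] and [0598] is shown below: the pass image and the captured image are compared only inside the inspection region, and the result is judged against a similarity threshold. The mean-absolute-difference measure and all names are illustrative assumptions, not the specific image processing of the embodiment.

```python
import numpy as np

def inspect(captured, pass_image, region_mask, similarity_threshold):
    """captured, pass_image : grayscale images of equal shape
    region_mask            : boolean array, True inside the inspection region
    Returns True (pass) when the images are sufficiently similar inside
    the inspection region, False (fail) otherwise."""
    diff = np.abs(captured.astype(float) - pass_image.astype(float))
    mean_diff = diff[region_mask].mean()     # ignore tools, jigs, etc.
    similarity = 1.0 - mean_diff / 255.0     # 1.0 means identical
    return similarity >= similarity_threshold
```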

[0599] Conventionally, however, the information used for the inspection, such as the inspection region, the viewpoint position, the line-of-sight direction, and the pass image described above, has been set by a user with expertise in image processing. This is because, although the comparison between the pass image and the captured image is performed by image processing, the settings of the information required for the inspection must also be changed in accordance with the specific content of that image processing.

[0600] For example, whether it is suitable for the comparison between the pass image and the captured image (in a narrow sense, the similarity determination process) to apply image processing that uses edges in the image, image processing that uses all pixel values, image processing that uses luminance or color difference and hue, or some other image processing may vary depending on the shape, color tone, texture, and the like of the inspection target object OB. Therefore, if the inspection allows the content of the image processing to be changed, the user performing the inspection must appropriately set which image processing is to be performed.

[0601] Moreover, even when the content of the image processing has already been set, or when the content of highly versatile image processing has been set in advance, the user still needs to properly understand that content. This is because, if the specific content of the image processing changes, the viewpoint position and line-of-sight direction suitable for the inspection may also change. For example, when edge information is used for the comparison, a position and direction from which a complicated part of the shape of the inspection target object OB can be observed may be set as the viewpoint position and line-of-sight direction, whereas a position and direction that observes a flat part is inappropriate. When pixel values are used for the comparison, it is preferable to set as the viewpoint position and line-of-sight direction a position and direction from which a region with large variations in color tone, or a region that is sufficiently illuminated by the light source and can be observed brightly, can be observed. That is, with conventional methods, expertise in image processing is required to set the information needed for the inspection, including the viewpoint position, the line-of-sight direction, and the pass image. Furthermore, if the content of the image processing differs, the reference for the comparison between the pass image and the captured image must also be changed. For example, it is necessary to decide, in accordance with the content of the image processing, how similar the pass image and the captured image must be to pass and how different they must be to fail, but without expertise in image processing, that reference cannot be set either.

[0602] That is, even if the inspection can be automated by using a robot or the like, it is still difficult to set the information required for that inspection, and for a user without specialized knowledge, automating the inspection cannot be said to be easy.

[0603] In addition, the robot envisioned by the present applicant is a robot that makes it easy for the user to give instructions when performing robot operations, and that is equipped with various sensors and the like so that the robot itself can recognize the work environment, allowing it to perform a variety of operations flexibly. Such a robot is suited to multi-product manufacturing (in a narrow sense, high-mix low-volume manufacturing in which the production quantity per product is small). However, even if giving instructions at the time of manufacturing is easy, whether inspection of the manufactured products is easy is another matter. This is because the positions of the objects to be inspected differ from product to product, and as a result, the inspection regions to be compared in the captured image and the pass image also differ for each product. That is, when multi-product manufacturing is envisioned, if the setting of the inspection region is left to the user, the burden of that setting process is large, which lowers productivity.

[0604] The present applicant therefore proposes the following method: second inspection information used for the inspection process is generated from first inspection information, thereby reducing the user's burden in the inspection process and improving productivity in robot operations. Specifically, the robot 30000 of this embodiment is a robot that performs an inspection process of inspecting an inspection target object using a captured image of the inspection target object captured by an imaging unit (for example, the imaging unit 5000 in FIG. 52); it generates, from the first inspection information, second inspection information including the inspection region of the inspection process, and performs the inspection process based on the second inspection information.

[0605] Here, the first inspection information is information that the robot 30000 can acquire at a time before the inspection process is executed, and refers to the information used to generate the second inspection information. Since the first inspection information is information acquired in advance, it can also be expressed as prior information. In this embodiment, the first inspection information may be input by the user, or may be generated in the robot 30000. Even when the first inspection information is input by the user, inputting it does not require expertise in image processing; it is information that can be input easily. Specifically, it may be information including at least one of shape information of the inspection target object OB, position and posture information of the inspection target object OB, and a relative inspection-process target position with respect to the inspection target object OB.

[0606] As described later, the second inspection information can be generated by using the shape information (in a narrow sense, three-dimensional model data), the position and posture information, and the information on the inspection-process target position. The shape information, such as CAD data, is generally acquired in advance, and when inputting the shape information, the user only needs to select existing information. For example, in a situation where data of various objects that are candidates for the inspection target object OB is held, the user only needs to select the inspection target object OB from among the candidates. As for the position and posture information, if it is known how the inspection target object OB is arranged at the time of inspection (for example, at what position on the workbench and in what posture), the position and posture information can also be set easily, and inputting it does not require expertise in image processing. The inspection-process target position is information indicating the position of the inspection target object OB that is to be inspected; for example, when a given part of the inspection target object OB is inspected for damage, it is information indicating the position of that given part. Also, when the inspection target object OB is an object B assembled onto an object A and it is inspected whether the assembly of the object A and the object B has been performed normally, the assembly position of the object A and the object B (contact surface, contact point, insertion position, etc.) is the inspection-process target position. Similarly, the inspection-process target position can be input easily if the content of the inspection is understood, and expertise in image processing is not required for the input.
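The relation between the first and second inspection information can be pictured with simple data structures, as in the sketch below. The field names and the trivial generation rule (placing the viewpoint a fixed distance from the inspection-process target position along a given approach direction) are illustrative assumptions; the actual generation processing is described later.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FirstInspectionInfo:          # prior information, see paragraph [0605]
    shape_model: object             # e.g. 3D model (CAD) data of the object
    object_pose: np.ndarray         # 4x4 position/posture of the object
    target_position: np.ndarray     # inspection-process target position (3,)

@dataclass
class SecondInspectionInfo:         # information used by the inspection process
    viewpoint_position: np.ndarray  # camera position (3,)
    view_direction: np.ndarray      # camera line-of-sight direction (3,)
    inspection_region: tuple        # e.g. (x, y, w, h) in the captured image

def generate_second_info(first, approach_dir, distance=0.2):
    """Toy generation rule: look at the target position from a point
    'distance' away along 'approach_dir'."""
    d = approach_dir / np.linalg.norm(approach_dir)
    viewpoint = first.target_position - distance * d
    region = (0, 0, 640, 480)       # placeholder; refined by later processing
    return SecondInspectionInfo(viewpoint, d, region)
```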

[0607] The method of this embodiment is not limited to automatically generating all of the second inspection information. For example, part of the second inspection information may be generated by the method of this embodiment, while the rest of the second inspection information is input manually by the user. In that case, the user cannot completely omit the input of the second inspection information, but at least the viewpoint information and the like that are difficult to set can be generated automatically, so the advantage that the inspection can be performed easily by the method of this embodiment remains unchanged.

[0608] The inspection process is a process performed on the result of a robot operation by the robot 30000, so the first inspection information may be information acquired during the robot operation.

[0609] Here, a robot operation refers to an operation performed by a robot, and various operations are conceivable, such as joining by screw fastening, welding, pressure welding, snap fitting, and the like, and deformation using a hand, a tool, or a jig. When the inspection process is performed on the result of a robot operation, the inspection process determines whether the robot operation has been performed normally. In this case, in order to start the robot operation, various kinds of information relating to the inspection target object OB and the operation content must be acquired. For example, where and in what posture the work object (all or part of the inspection target object OB) is placed before the operation, and to what position and posture it changes after the operation, are known information. If screw fastening or welding is performed, the positions of the fastening screws and the welding positions on the work object are known. Similarly, when a plurality of objects are joined, where and from what direction the object A is joined with what object is known information, and when the work object is deformed, the deformation position on the work object and the shape after deformation are also known information.

[0610] That is, when a robot operation is the target, a considerable part (depending on the situation, all of the required first inspection information) of the information corresponding to the above-described shape information, position and posture information, and inspection-process target position, and of the other information included in the first inspection information, is known on the premise that the robot operation has been completed. In other words, in the robot 30000 of this embodiment, the first inspection information may simply reuse the information held by the unit that controls the robot (for example, the processing unit 11120 in FIG. 50) and the like. Also, even when the method of this embodiment is applied to a processing device 10000 different from the robot 30000, as described later with reference to FIGS. 51A and 51B, the processing device 10000 only needs to acquire the first inspection information from the control unit 3500 or the like included in the robot. Therefore, from the user's point of view, the second inspection information can be generated easily without re-entering the first inspection information for the inspection.

[0611] Thus, even a user without expertise in image processing can easily execute the inspection (at least acquire the second inspection information), or the burden of setting the second inspection information at the time of inspection execution can be reduced. In the following description of this specification, an example in which the target of the inspection process is the result of a robot operation will be described. That is, the user does not need to input the first inspection information; however, as described above, the user may input part or all of the first inspection information. Even when the user inputs the first inspection information, the advantage that the inspection is easy remains unchanged in that no specialized knowledge is required for inputting the first inspection information.

[0612] In the following description, as described later with reference to FIGS. 52 and 53, an example in which the robot 30000 generates the second inspection information and the inspection process is executed in that robot 30000 will mainly be described. However, the method of this embodiment is not limited to this; the following description can be extended, as shown in FIG. 51A, to a method in which the second inspection information is generated in the processing device 10000 and the robot 30000 acquires that second inspection information and executes the inspection process. Alternatively, as shown in FIG. 51B, it can be extended to a method in which the second inspection information is generated in the processing device 10000 and the inspection process using that second inspection information is executed not in a robot but in a dedicated inspection device or the like.

[0613] In the following, system configuration examples of the robot 30000 and the processing device 10000 of this embodiment will be described, and then the specific processing flow will be described. More specifically, the flow from the acquisition of the first inspection information to the generation of the second inspection information will be described as offline processing, and the flow of the actual inspection process performed by the robot using the generated second inspection information will be described as online processing.

[0614] 2. System Configuration Example

[0615] Next, system configuration examples of the robot 30000 and the processing device 10000 of this embodiment will be described. As shown in FIG. 50, the robot of this embodiment includes an information acquisition unit 11110, a processing unit 11120, a robot mechanism 300000, and an imaging unit 5000. However, the robot 30000 is not limited to the configuration of FIG. 50, and various modifications are possible, such as omitting some of these components or adding other components.

[0616] The information acquisition unit 11110 acquires the first inspection information before the inspection process. When the first inspection information is input by the user, the information acquisition unit 11110 performs processing for accepting the input information from the user. When the information used for the robot operation is used as the first inspection information, the information acquisition unit 11110 may perform processing such as reading out, from a storage unit not shown in FIG. 50 or the like, the control information used in the processing unit 11120 during the operation.

[0617] The processing unit 11120 performs the process of generating the second inspection information based on the first inspection information acquired by the information acquisition unit 11110, and performs the inspection process using the second inspection information. The processing in the processing unit 11120 will be described in detail later. The processing unit 11120 also controls the robot 30000 during the inspection process and outside the inspection process (for example, during robot operations such as assembly). For example, the processing unit 11120 controls the arm 3100 included in the robot mechanism 300000, the imaging unit 5000, and the like. The imaging unit 5000 may be a hand-eye camera attached to the arm 3100 of the robot.

[0618] Furthermore, as shown in FIG. 51A, the method of this embodiment can be applied to a processing device that outputs information for the inspection process to a device that performs the inspection process of the inspection target object using a captured image of the inspection target object captured by an imaging unit (FIG. 51A shows the imaging unit 5000, but the imaging unit is not limited to this): the processing device 10000 generates, from the first inspection information, second inspection information including viewpoint information, which includes the viewpoint position and line-of-sight direction of the imaging unit for the inspection process, and the inspection region of the inspection process, and outputs the second inspection information to the device that performs the inspection process. In this case, the acquisition of the first inspection information and the generation of the second inspection information are performed by the processing device 10000; as shown in FIG. 51A, the processing device 10000 can be realized as a processing device including the information acquisition unit 11110 and the processing unit 11120.

[0619] Here, the device that performs the inspection process may be the robot 30000, as described above. In this case, as shown in FIG. 51A, the robot 30000 includes the arm 3100, the imaging unit 5000 for the inspection process of the inspection target object, and the control unit 3500 that controls the arm 3100 and the imaging unit 5000. The control unit 3500 acquires, as the second inspection information from the processing device 10000, information including the viewpoint information indicating the viewpoint position and line-of-sight direction of the imaging unit 5000 and the inspection region, performs control to move the imaging unit 5000 to the viewpoint position and line-of-sight direction corresponding to the viewpoint information based on the second inspection information, and executes the inspection process using the acquired captured image and the inspection region.
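One way the control unit 3500 might convert the viewpoint information of the second inspection information into a camera pose is a standard look-at construction, sketched below under the assumption that the optical axis is the camera z axis; all names are hypothetical. The robot would then move the hand-eye camera to this pose, for example by inverse kinematics, before capturing the image used for the inspection process.

```python
import numpy as np

def camera_pose_from_viewpoint(viewpoint_position, view_direction, up=(0, 0, 1)):
    """Build a 4x4 camera pose whose z axis (optical axis) follows the
    line-of-sight direction and whose origin is the viewpoint position."""
    z = view_direction / np.linalg.norm(view_direction)
    x = np.cross(up, z)
    if np.linalg.norm(x) < 1e-6:          # view direction parallel to 'up'
        x = np.cross((0.0, 1.0, 0.0), z)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2] = x, y, z
    pose[:3, 3] = viewpoint_position
    return pose
```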

[0620] In this way, the second inspection information can be generated in the processing device 10000, and the inspection process can be executed appropriately in another machine using that second inspection information. If the device that performs the inspection process is the robot 30000, then, as in FIG. 50, a robot that performs the inspection process using the second inspection information can be realized; however, FIG. 51A differs from FIG. 50 in that the entity that executes the process of generating the second inspection information and the entity that executes the inspection process using the second inspection information are different machines.

[0621] The processing device 10000 may not only perform the process of generating the second inspection information but also perform control processing of the robot 30000 in conjunction with it. For example, the processing unit 11120 of the processing device 10000 generates the second inspection information and also generates control information for the robot based on that second inspection information. In this case, the control unit 3500 of the robot operates the arm 3100 and the like in accordance with the control information generated by the processing unit 11120 of the processing device 10000. That is, the processing device 10000 takes on a substantial part of the control of the robot, and the processing device 10000 in this case can also be understood as a robot control device.

[0622] The entity that executes the inspection process using the second inspection information generated by the processing device 10000 is not limited to the robot 30000. For example, the inspection process may be performed using the second inspection information in a dedicated machine such as that shown in FIG. 54; the configuration in that case is shown, for example, in FIG. 51B. FIG. 51B shows an example in which the inspection device accepts the input of the first inspection information (for example, using the interface unit IF in FIG. 54) and outputs that first inspection information to the processing device 10000. In this case, the processing device 10000 generates the second inspection information using the first inspection information input from the inspection device. However, various modifications of how the first inspection information is handled are possible, such as the example of inputting it directly from the user to the processing device.

[0623] As shown in Fig. 52, the robot 30000 of the present embodiment may also be a single-arm robot having one arm. In Fig. 52, the imaging unit 5000 (a hand-eye camera) is provided as the end effector of the arm 3100. However, various modifications are possible, such as providing a gripping unit such as a hand as the end effector and providing the imaging unit 5000 on that gripping unit, on another part of the arm 3100, or elsewhere. In Fig. 52, a machine such as a PC is shown as the machine corresponding to the control unit 3500 of Fig. 51A, but this machine may instead correspond to the information acquisition unit 11110 and the processing unit 11120 of Fig. 50. Also, Fig. 52 includes the interface unit 6000, shown as the operation unit 6100 and the display unit 6200; whether the interface unit 6000 is included, and how the interface unit 6000 is configured when it is included, can be modified.

[0624] The configuration of the robot 30000 of the present embodiment is not limited to Fig. 52. For example, as shown in Fig. 53, the robot 30000 may include at least a first arm 3100 and a second arm 3200 different from the first arm 3100, with the imaging unit 5000 being a hand-eye camera provided on at least one of the first arm 3100 and the second arm 3200. In Fig. 53, the first arm 3100 is composed of joints 3110 and 3130 and frames 3150 and 3170 provided between the joints, and the second arm 3200 is composed likewise, but the configuration is not limited to this. Fig. 53 shows an example of a dual-arm robot having two arms, but the robot of the present embodiment may have three or more arms. Although the imaging unit 5000 is depicted as being provided both as the hand-eye camera (5000-1) of the first arm 3100 and as the hand-eye camera (5000-2) of the second arm 3200, it may be provided on only one of them.

[0625] The robot 30000 of Fig. 53 also includes a base unit 4000. The base unit 4000 is provided at the lower part of the robot body and supports the robot body. In the example of Fig. 53, the base unit 4000 is provided with wheels or the like so that the robot as a whole can move. However, the base unit 4000 may instead have no wheels and be fixed to the floor or the like. In the robot of Fig. 53, the control device (the device shown as the control unit 3500 in Fig. 52) is housed in the base unit 4000, so that the robot mechanism 300000 and the control unit 3500 are configured as a single unit. Alternatively, instead of providing a dedicated control machine such as the device corresponding to the control unit 3500 of Fig. 52, the control unit 3500 may be realized by a board built into the robot (more specifically, an IC or the like mounted on the board).

[0626] When a robot having two or more arms is used, flexible inspection processing becomes possible. For example, when a plurality of imaging units 5000 are provided, inspection can be performed simultaneously from multiple viewpoint positions and line-of-sight directions. It is also possible to inspect an inspection target object OB held by a gripping unit provided on one arm using a hand-eye camera provided on another arm. In that case, not only the viewpoint position and line-of-sight direction of the imaging unit 5000 but also the position and orientation of the inspection target object OB can be changed.

[0627] Furthermore, as shown in Fig. 20, the functions of the parts corresponding to the processing unit 11120 and the like in the processing device or the robot 30000 of the present embodiment may also be realized by a server 700 that is communicably connected to the robot 30 via a network 20 including at least one of a wired and a wireless connection.

[0628] Alternatively, in the present embodiment, part of the processing of the processing device and the like of the present invention may be performed on the side of the server 700 serving as a processing device. In this case, the processing is realized by distributed processing between the processing device provided on the robot side and the server 700 serving as a processing device. Specifically, the server 700 side performs, among the processes of the processing device of the present invention, those assigned to the server 700. On the other hand, the processing device 10000 provided in the robot performs, among the processes of the processing device of the present invention, those assigned to the processing unit and the like of the robot.

[0629] For example, suppose the processing device of the present invention performs first to M-th processes (M being an integer), and consider the case where each of the first to M-th processes is divided into a plurality of sub-processes, such that the first process is realized by sub-process 1a and sub-process 1b, the second process by sub-process 2a and sub-process 2b, and so on. In that case, distributed processing can be considered in which the server 700 side performs sub-process 1a, sub-process 2a, ..., sub-process Ma, and the processing device 10000 provided on the robot side performs sub-process 1b, sub-process 2b, ..., sub-process Mb. Here, the processing device of the present embodiment, that is, the processing device that executes the first to M-th processes, may be the processing device that executes sub-process 1a to sub-process Ma, may be the processing device that executes sub-process 1b to sub-process Mb, or may be the processing device that executes all of sub-process 1a to sub-process Ma and sub-process 1b to sub-process Mb. More generally, the processing device of the present embodiment is a processing device that executes at least one sub-process of each of the first to M-th processes.

[0630] In this way, for example, the server 700, which has higher processing capability than the robot-side processing device 10000, can handle processing with a high processing load. Moreover, when the processing device also performs robot control, the server 700 can collectively control the operation of each robot, which makes it easy, for example, to coordinate the operations of a plurality of robots.

[0631] In recent years, there has also been an increasing trend toward manufacturing a wide variety of parts in small quantities. When the type of part being manufactured is changed, the operations performed by the robot must be changed as well. With the configuration shown in Fig. 20, the server 700 can collectively change the operations performed by the robots without redoing the teaching work for each of the plurality of robots. Furthermore, compared with providing one processing device for each robot, the trouble involved in updating the software of the processing devices can be greatly reduced.

[0632] 3. Process Flow

[0633] Next, the process flow of the present embodiment will be described. Specifically, the flow for acquiring the first inspection information and generating the second inspection information, and the flow for executing the inspection process based on the generated second inspection information, will be described. Assuming the inspection process is executed by a robot, the acquisition of the first inspection information and the generation of the second inspection information can be performed without any robot motion for the inspection process and are therefore referred to as offline processing. On the other hand, since the execution of the inspection process involves robot motion, it is referred to as online processing.

[0634] In the following, an example is described in which the target of the inspection process is the result of an assembly operation by the robot and the inspection process itself is also executed by the robot; as described above, however, various modifications are possible.

[0635] 3.1 Offline Processing

[0636] First, Fig. 55 shows a specific example of the first inspection information and the second inspection information of the present embodiment. The second inspection information includes viewpoint information (viewpoint position and line-of-sight direction), an inspection region (ROI, confirmation region), and a pass image. The first inspection information includes shape information (three-dimensional model data), position and orientation information of the inspection target object, and the inspection processing target position.
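To make the data flow of the following steps easier to follow, the items of Fig. 55 can be summarized as simple data structures. The Python sketch below is for illustration only; the class and field names are assumptions that do not appear in the embodiment, and the threshold and priority fields are only filled in by the later steps S100005 and S100006.

```python
from dataclasses import dataclass, field
from typing import List, Tuple
import numpy as np

@dataclass
class FirstInspectionInfo:
    model_pre: np.ndarray        # pre-operation 3D model data (shape information)
    model_post: np.ndarray       # post-operation 3D model data
    object_pose: Tuple[np.ndarray, np.ndarray]  # (R, t) of the object in the global frame
    target_position: np.ndarray  # inspection processing target position

@dataclass
class Viewpoint:
    position: np.ndarray         # viewpoint position (x, y, z)
    direction: np.ndarray        # line-of-sight direction (ax, ay, az)
    roll_ref: np.ndarray         # (bx, by, bz): rotation about the optical axis
    pass_image: np.ndarray = None                         # set in S100003
    inspection_region: Tuple[int, int, int, int] = None   # set in S100004
    threshold: float = 0.0       # pass threshold, set in S100005
    priority: float = 0.0        # priority, set in S100006

@dataclass
class SecondInspectionInfo:
    viewpoints: List[Viewpoint] = field(default_factory=list)
```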

[0637] The flowchart of Fig. 56 shows the specific flow of the offline processing. When the offline processing starts, the information acquisition unit 11110 first acquires the three-dimensional model data (shape information) of the inspection target object as the first inspection information (S100001). In inspection (appearance inspection), what matters is how the inspection target object is observed, and how it appears from a given viewpoint position and line-of-sight direction depends on the shape of the inspection target object. In particular, since the three-dimensional model data represents the inspection target object in an ideal state with no defects or deformation, it is useful information for inspecting the actual inspection target object.

[0638] When the inspection process is performed on the result of a robot operation, the information acquisition unit 11110 acquires post-operation three-dimensional model data, which is the three-dimensional model data of the inspection target object as obtained as the result of the robot operation, and pre-operation three-dimensional model data, which is the three-dimensional model data of the inspection target object before the robot operation.

[0639] When inspecting the result of a robot operation, it must be determined whether the operation was performed properly. If the operation is an assembly operation that assembles object A and object B, it is determined whether object B has been assembled to object A at the prescribed position from the prescribed direction. That is, acquiring the three-dimensional model data of object A and object B individually is not sufficient; what matters is data in which object B is assembled to object A at the prescribed position from the prescribed direction, in other words, the three-dimensional model data of the state in which the operation has been completed ideally. The information acquisition unit 11110 of the present embodiment therefore acquires the post-operation three-dimensional model data. In addition, as in the setting of the inspection region and the pass threshold described later, there are situations in which the difference in appearance before and after the operation is the key point, so the pre-operation three-dimensional model data is also acquired in advance.

[0640] Figs. 57A and 57B show an example of the pre-operation three-dimensional model data and the post-operation three-dimensional model data. Figs. 57A and 57B use, as an example, an operation in which a cube-shaped block object B is assembled to a cube-shaped block object A, at a position offset along one given axis and in the same orientation as object A. In this case, since the pre-operation three-dimensional model data represents the state before object B is assembled, it is the three-dimensional model data of object A alone, as shown in Fig. 57A. As shown in Fig. 57B, the post-operation three-dimensional model data is the data in which object A and object B are assembled under the above conditions. Because Figs. 57A and 57B are drawn as planar illustrations, the three-dimensional model data appears there as if observed from a given viewpoint position and line-of-sight direction; as the term "three-dimensional" indicates, however, the shape data acquired as the first inspection information is three-dimensional data whose viewing position and direction are not limited.

[0641] In S100001, viewpoint candidate information, which is a set of candidates for the viewpoint information (information including the viewpoint position and line-of-sight direction), is also acquired. It is assumed that this viewpoint candidate information is not information input by the user or generated by the processing unit 11120 or the like, but is, for example, information preset by the manufacturer of the processing device 10000 (or the robot 30000) before shipment.

[0642] Although the viewpoint candidate information is, as described above, a set of candidates for the viewpoint information, the number of points that could serve as such candidates is extremely large (in a narrow sense, infinite). For example, when the viewpoint information is set in an object coordinate system referenced to the inspection target object, every point other than those inside the inspection target object in the object coordinate system could be a viewpoint candidate. Of course, using that many viewpoint candidates (that is, not restricting the candidates in the processing) would allow the viewpoint information to be set flexibly or finely according to the situation. Therefore, if the processing load when setting the viewpoint information is not a problem, the viewpoint candidate information need not be acquired in S100001. In the following description, however, the viewpoint candidate information is set in advance so that it can be used generically even when various objects become the inspection target object, and so that the processing load in setting the viewpoint information does not increase.

[0643] At this point, the position and orientation at which the inspection target object OB will be placed at inspection time are not necessarily known. Since it is therefore unclear whether the imaging unit 5000 can be moved to the position and orientation corresponding to a given piece of viewpoint information, limiting the viewpoint information to a very small number (for example, one or two) is unrealistic. This is because, if only a few pieces of viewpoint information are generated and the imaging unit 5000 cannot be moved to those few viewpoints, the inspection process cannot be executed at all. To suppress this risk, a certain number of pieces of viewpoint information must be generated, and as a result the number of viewpoint candidates is also of a certain size.

[0644] Fig. 58 shows an example of the viewpoint candidate information. In Fig. 58, 18 viewpoint candidates are set around the origin of the object coordinate system. Their specific coordinate values are shown in Fig. 59. For viewpoint candidate A, the viewpoint position is on the X axis, at a given distance from the origin (200 in the example of Fig. 59). The line-of-sight direction corresponds to the vector represented by (ax, ay, az); for viewpoint candidate A it is the negative X direction, that is, toward the origin. Note that even when the line-of-sight direction vector is determined, the imaging unit 5000 can still rotate about that vector, so the orientation is not uniquely fixed. Therefore, another vector (bx, by, bz) that specifies the rotation angle about the line-of-sight direction vector is set in advance. As shown in Fig. 58, a total of 18 points are used as viewpoint candidates: two points on each of the x, y, and z axes, and points between pairs of axes. By setting the viewpoint candidates so as to surround the origin of the object coordinate system in this way, appropriate viewpoint information can be set in the world coordinate system (robot coordinate system) regardless of how the inspection target object is placed. Specifically, this suppresses the possibility that the imaging unit 5000 cannot be moved to any (or most) of the viewpoints set from the candidates, or that inspection is impossible even after moving because of occluding objects, so that inspection from at least a sufficient number of viewpoints can be achieved.
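As an illustration of how the 18 candidates of Fig. 58 could be generated, the following Python sketch places two candidates on each axis and one between each pair of axes at the same radius. The 45-degree placement of the "between-axes" candidates and the choice of roll-reference vector are assumptions; the authoritative coordinate values are those of Fig. 59.

```python
import itertools
import numpy as np

def make_viewpoint_candidates(radius=200.0):
    """Generate viewpoint candidates around the object-coordinate origin.
    Returns (position, view_dir, up_hint) tuples: position is (x, y, z),
    view_dir is (ax, ay, az) pointing at the origin, and up_hint plays the
    role of (bx, by, bz), fixing the roll about view_dir."""
    positions = []
    # two points on each of the x, y, z axes (6 candidates)
    for axis in range(3):
        for sign in (+1.0, -1.0):
            p = np.zeros(3)
            p[axis] = sign * radius
            positions.append(p)
    # points between each pair of axes (12 candidates, assumed at 45 degrees)
    c = radius / np.sqrt(2.0)
    for i, j in itertools.combinations(range(3), 2):
        for si, sj in itertools.product((+1.0, -1.0), repeat=2):
            p = np.zeros(3)
            p[i], p[j] = si * c, sj * c
            positions.append(p)

    candidates = []
    for p in positions:
        view_dir = -p / np.linalg.norm(p)   # look toward the origin
        # any vector not parallel to view_dir can serve as the roll reference
        up_hint = np.array([0.0, 0.0, 1.0])
        if abs(np.dot(up_hint, view_dir)) > 0.99:
            up_hint = np.array([0.0, 1.0, 0.0])
        candidates.append((p, view_dir, up_hint))
    return candidates

# 6 on-axis candidates + 12 between-axes candidates = 18 in total
assert len(make_viewpoint_candidates()) == 18
```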

[0645] In appearance inspection, inspecting from only one viewpoint position and line-of-sight direction is acceptable, but considering accuracy it is preferable to inspect from multiple viewpoint positions and line-of-sight directions. This is because, when inspecting from only one direction, the region that should be inspected may not be observable adequately (for example, at a sufficiently large size in the image). Therefore, the second inspection information preferably contains not a single piece of viewpoint information but a viewpoint information group including multiple pieces of viewpoint information. This is realized, for example, by generating the viewpoint information using multiple candidates (basically all candidates) from the viewpoint candidate information described above. Even when the viewpoint candidate information is not used, it suffices to obtain multiple pieces of viewpoint information. That is, the second inspection information includes a viewpoint information group containing multiple pieces of viewpoint information, and each piece of viewpoint information in the group includes the viewpoint position and line-of-sight direction of the imaging unit 5000 in the inspection process. Specifically, the processing unit 11120 generates, based on the first inspection information, a viewpoint information group including multiple pieces of viewpoint information of the imaging unit 5000 as the second inspection information.

[0646] The viewpoint candidate information described above consists of positions in the object coordinate system, but at the stage when the candidates are set, the specific shape and size of the inspection target object are undetermined. Specifically, although Fig. 58 uses an object coordinate system referenced to the inspection target object, the position and orientation of the object in that object coordinate system are still undefined. Since viewpoint information must at least specify a relative positional relationship with the inspection target object, an association with the inspection target object is needed in order to generate concrete viewpoint information from the viewpoint candidate information.

[0647] Here, considering the viewpoint candidate information described above, the origin of the coordinate system in which the candidates are set is the position at the center of all the candidates, and when the imaging unit 5000 is placed at any candidate, the origin lies in the imaging direction (optical axis direction) of the imaging unit 5000. In other words, the origin of the coordinate system can be said to be the position best observed by the imaging unit 5000. Since the position that should be observed most closely in the inspection process is the inspection processing target position described above (in a narrow sense a fitting position, or an assembly position as shown in Fig. 58), the viewpoint information corresponding to the inspection target object is generated using the inspection processing target position acquired as the first inspection information.

[0648] That is, the first inspection information includes the inspection processing target position relative to the inspection target object, and the robot 30000 sets an object coordinate system corresponding to the inspection target object with the inspection processing target position as a reference, and generates the viewpoint information using that object coordinate system. Specifically, the information acquisition unit 11110 acquires the inspection processing target position relative to the inspection target object as the first inspection information, and the processing unit 11120 sets the object coordinate system corresponding to the inspection target object with the inspection processing target position as a reference and generates the viewpoint information using that object coordinate system (S100002).

[0649] For example, suppose the shape data of the inspection target object has the shape shown in Fig. 60, and first inspection information is acquired in which point O in that figure is the inspection processing target position. In this case, it suffices to set an object coordinate system with point O as the origin and with the orientation of the inspection target object as shown in Fig. 60. Once the position and orientation of the inspection target object in the object coordinate system are determined, the relative relationship between each viewpoint candidate and the inspection target object becomes clear, so each viewpoint candidate can be used as viewpoint information.

[0650] Once the viewpoint information group containing multiple pieces of viewpoint information has been generated, the various components of the second inspection information are generated. First, the pass image corresponding to each piece of viewpoint information is generated (S100003). Specifically, the processing unit 11120 acquires, as the pass image used in the inspection process, an image of the three-dimensional model data captured by a virtual camera placed at the viewpoint position and line-of-sight direction corresponding to the viewpoint information.

[0651] The pass image needs to be an image showing the inspection target object in its ideal state. Since the three-dimensional model data (shape information) acquired as the first inspection information is the ideal shape data of the inspection target object, an image of that three-dimensional model data as seen from an imaging unit placed according to the viewpoint information can be used as the pass image. When the three-dimensional model data is used, no actual capture by the imaging unit 5000 takes place; instead, processing using a virtual camera is performed (specifically, a conversion that projects the three-dimensional data into two-dimensional data). If an inspection process for the result of a robot operation is assumed, the pass image is an image showing the ideal state of the inspection target object when the robot operation is completed. Since that ideal state is represented by the post-operation three-dimensional model data described above, an image of the post-operation three-dimensional model data captured by the virtual camera can be used as the pass image. Because a pass image is obtained for each piece of viewpoint information, 18 pass images are obtained when 18 pieces of viewpoint information are set as described above. The images on the right side of each of Figs. 61A to 61G correspond to the pass images for the assembly operation of Fig. 57B. Figs. 61A to 61G show images for seven viewpoints, but as noted above, the number of images is determined by the number of pieces of viewpoint information. In the processing of S100003, pre-operation images of the pre-operation three-dimensional model data captured by the virtual camera (left side of each of Figs. 61A to 61G) are also obtained in advance, in view of the processing of the inspection region and pass threshold described later.
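The "virtual camera" of S100003 is, in essence, a projection of the three-dimensional model data into a two-dimensional image. The following is a minimal pinhole-projection sketch assuming the model is given as a point set; an actual implementation would render the mesh with hidden-surface removal, and the focal length and image size used here are arbitrary assumptions.

```python
import numpy as np

def render_virtual_view(model_points, cam_pos, view_dir, up_hint,
                        f=500.0, width=640, height=480):
    """Project 3D model points (N, 3) in the object coordinate system into a
    binary 2D image seen from a virtual camera, as a stand-in for the
    pass/pre-operation image rendering of S100003. up_hint must not be
    parallel to view_dir; f is an assumed focal length in pixels."""
    z = view_dir / np.linalg.norm(view_dir)       # optical axis
    x = np.cross(up_hint, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.stack([x, y, z])                       # world -> camera rotation
    pts_cam = (model_points - cam_pos) @ R.T
    img = np.zeros((height, width), dtype=np.uint8)
    in_front = pts_cam[:, 2] > 1e-6               # keep points in front of the camera
    u = f * pts_cam[in_front, 0] / pts_cam[in_front, 2] + width / 2
    v = f * pts_cam[in_front, 1] / pts_cam[in_front, 2] + height / 2
    ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    img[v[ok].astype(int), u[ok].astype(int)] = 255
    return img
```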

[0652] Next, the inspection region, which is the image region of the pass image and of the captured image used for the inspection process, is obtained as part of the second inspection information (S100004). The inspection region is as described above. Since the way the important parts appear in the inspection changes with the viewpoint information, the inspection region is set for each piece of viewpoint information included in the viewpoint information group.

[0653] Specifically, the processing unit 11120 acquires, as the pass image, an image of the post-operation three-dimensional model data captured by a virtual camera placed at the viewpoint position and line-of-sight direction corresponding to the viewpoint information, acquires, as the pre-operation image, an image of the pre-operation three-dimensional model data captured by a virtual camera placed at the same viewpoint position and line-of-sight direction, and obtains the inspection region, that is, the image region used for the inspection process, from a comparison of the pre-operation image and the pass image.

[0654] Figs. 62A to 62D show a specific example of the inspection region setting process. When the result of a robot operation that assembles object B to object A from the right is the inspection target, an image of the post-operation three-dimensional model data captured by the virtual camera is acquired as the pass image as shown in Fig. 62B, and an image of the pre-operation three-dimensional model data captured by the virtual camera is acquired as the pre-operation image as shown in Fig. 62A. In a robot operation that changes the state of the objects in this way, the part whose state changes is the more important one. In the example of Fig. 62B, what should be judged in the inspection is whether object B has been assembled to object A and whether its assembly position is correct. Parts of object A other than those related to the operation (for example, the mating surface used in the assembly) can also be inspected, but their importance is comparatively low.

[0655] In other words, the high-importance regions of the pass image and of the captured image can be regarded as the regions that change between before and after the operation. Accordingly, in the present embodiment, the processing unit 11120 performs, as the comparison process, a process of obtaining the difference between the pre-operation image and the pass image, that is, a difference image, and obtains, as the inspection region, the region of the difference image that contains the inspection target object. In the example of Figs. 62A and 62B, the difference image is Fig. 62C, so an inspection region that includes the region of object B contained in Fig. 62C is set.

[0656] In this way, the region of the difference image that contains the inspection target object, that is, the region inferred to be of high importance for the inspection, can be used as the inspection region.

[0657] At this point, the inspection processing target position (the fitting position in Fig. 62A and the like) is known as part of the first inspection information, and where that position lies in the image is also known. As described above, since the inspection processing target position serves as the reference position for the inspection, the inspection region can also be obtained from the difference image and the inspection processing target position. For example, as shown in Fig. 62C, the processing unit 11120 obtains BlobHeight, the maximum vertical extent of the region remaining in the difference image measured from the inspection processing target position, and BlobWidth, the maximum horizontal extent. If the region within BlobHeight above and below, and BlobWidth to the left and right, of the inspection processing target position is then taken as the inspection region, the region of the difference image containing the inspection target object can be obtained as the inspection region. In the present embodiment, margins may also be added vertically and horizontally; in the example of Fig. 62D, a region with a 30-pixel margin on the top, bottom, left, and right is used as the inspection region.
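A possible implementation of this inspection region computation (S100004) is sketched below. The gray-level difference threshold and the measurement of BlobHeight and BlobWidth as maximum extents from the target position are assumptions consistent with the description above.

```python
import numpy as np

def inspection_region(pre_img, pass_img, target_xy, margin=30, thresh=10):
    """Derive the inspection region from the pre-operation image, the pass
    image and the inspection processing target position. target_xy = (u, v)
    pixel coordinates of the target position; both images are assumed to be
    grayscale arrays of the same size."""
    diff = np.abs(pass_img.astype(int) - pre_img.astype(int)) > thresh
    vs, us = np.nonzero(diff)                       # pixels that changed
    if len(us) == 0:
        return None                                 # nothing changed -> no region
    u0, v0 = target_xy
    blob_width = int(np.max(np.abs(us - u0)))       # max horizontal extent (BlobWidth)
    blob_height = int(np.max(np.abs(vs - v0)))      # max vertical extent (BlobHeight)
    h, w = pre_img.shape
    left = max(0, u0 - blob_width - margin)
    right = min(w, u0 + blob_width + margin)
    top = max(0, v0 - blob_height - margin)
    bottom = min(h, v0 + blob_height + margin)
    return (left, top, right, bottom)               # ROI as a bounding box
```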

[0658] The same applies to Figs. 63A to 63D and Figs. 64A to 64D. Figs. 63A to 63D show an example of an operation in which a relatively thin object B is assembled on the far side of object A as seen from the viewpoint position (or an operation in which a rod-shaped object B is inserted into an object A that has a hole oriented horizontally in the image). In this case, the region of the inspection target object in the difference image is split into multiple discontinuous regions, but it can be handled in the same way as in Figs. 62A to 62D. Here, object B is assumed to be thinner than object A, and the vicinity of the upper end or the lower end of object A in the image is of low importance for the inspection. With the technique of the present embodiment, as shown in Fig. 63D, the regions of object A considered to be of low importance can be excluded from the inspection region.

[0659] Figs. 64A to 64D show an operation in which an object B smaller than a large object A is assembled to object A. This corresponds, for example, to an operation of fastening a screw (object B) at a prescribed position of an object A such as a PC or a printer. In such an operation, there is little need to inspect the entire PC or printer, whereas the position where the screw is fastened is highly important. In this respect, with the technique of the present embodiment, most of object A can be excluded from the inspection region and the surroundings of object B, which should be inspected, can be set as the inspection region, as shown in Fig. 64D.

[0660] The above technique is a highly general way of setting the inspection region, but the inspection region setting technique of the present embodiment is not limited to it, and the inspection region may be set by other techniques. For example, in Fig. 62D an even narrower inspection region would suffice, so a technique that sets a narrower region may also be used.

[0661] Next, the threshold used in the comparison between the pass image and the actually captured image (the pass threshold) is set (S100005). Specifically, the processing unit 11120 acquires the pass image and the pre-operation image described above, and sets the threshold used in the inspection process based on the captured image and the pass image, according to the similarity between the pre-operation image and the pass image.

[0662] Figs. 65A to 65D show a specific example of the threshold setting process. Fig. 65A is the pass image: if the robot operation is performed ideally (more broadly, if the inspection target object is in its ideal state), the actually captured image should match the pass image, and the similarity takes its maximum value (here, 1000). Conversely, if the captured image shares no elements at all with the pass image, the similarity takes its minimum value (here, 0). The threshold here is a value such that the inspection is judged to pass if the similarity between the pass image and the captured image is at or above the threshold, and to fail if the similarity is below the threshold. That is, the threshold is a given value between 0 and 1000.

[0663] Here, Fig. 65B is the pre-operation image corresponding to Fig. 65A, but since Fig. 65B also contains parts in common with Fig. 65A, the similarity between the pre-operation image and the pass image is not zero. For example, when edge information of the images is used to judge similarity, Fig. 65C, the edge information of Fig. 65A, is used for the comparison, but Fig. 65D, the edge information of the pre-operation image, also contains portions that match Fig. 65C. In the example of Figs. 65C and 65D, the similarity value comes to just under 700. Consequently, even if the inspection target object is captured in a state where no operation has been performed at all, the similarity between that captured image and the pass image still stays at a value of around 700. Capturing the inspection target object in a state where no operation has been performed corresponds, for example, to a situation where the operation itself could not be executed, or where the operation was executed but the object being assembled fell and does not appear in the image, in which case the robot operation has most likely failed. In other words, since a similarity of around 700 appears even in situations that should be judged "fail", setting the threshold to a value lower than this can be said to be inappropriate.

[0664] Therefore, in the present embodiment, a value between the maximum similarity (for example, 1000) and the similarity between the pre-operation image and the pass image (for example, 700) is set as the threshold. As one example, the average of the two may be used; the threshold may be obtained with the following expression (13).

[0665] threshold = {1000 + (similarity between the pass image and the pre-operation image)} / 2 ... (13)

[0666] The setting of the threshold can be modified in various ways; for example, the formula used to obtain the threshold may be changed according to the value of the similarity between the pass image and the pre-operation image.

[0667] For example, the following modification is possible: if the similarity between the pass image and the pre-operation image is 600 or below, the threshold is fixed at 800; if it is 900 or above, the threshold is fixed at 1000; otherwise, expression (13) above is used.
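Expression (13) together with the modification just described can be written, for illustration, as follows; the similarity is assumed to be given on the 0 to 1000 scale used in this section.

```python
def pass_threshold(sim_pre_vs_pass, max_similarity=1000):
    """Pass-threshold setting of S100005, with the fixed values for very low
    and very high pre-operation/pass similarities described in [0667]."""
    if sim_pre_vs_pass <= 600:
        return 800
    if sim_pre_vs_pass >= 900:
        return max_similarity
    return (max_similarity + sim_pre_vs_pass) / 2   # expression (13)

# Example: a pre-operation/pass similarity of 700 gives a threshold of 850.
assert pass_threshold(700) == 850
```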

[0668] The similarity between the pass image and the pre-operation image varies with how the appearance of the inspection target object changes before and after the operation. For example, in the case of Figs. 66A to 66D, which correspond to viewpoint information different from that of Figs. 65A to 65D, the difference in appearance of the inspection target object before and after assembly is small, and as a result the similarity between the pass image and the pre-operation image is higher than in the example above. That is, as with the pass image and the inspection region, the similarity and the threshold are calculated for each piece of viewpoint information included in the viewpoint information group.

[0669] Finally, the processing unit 11120 sets, for each piece of viewpoint information in the viewpoint information group, priority information indicating the priority with which the imaging unit 5000 is moved to the viewpoint position and line-of-sight direction corresponding to that viewpoint information (S100006). As described above, the appearance of the inspection target object changes with the viewpoint position and line-of-sight direction represented by the viewpoint information. As a result, a situation can arise in which the region of the inspection target object that should be inspected is clearly observable from one piece of viewpoint information but not observable from another. Also, since the viewpoint information group contains a sufficient number of pieces of viewpoint information, the inspection process does not need to use all of them: if the inspection passes at a prescribed number of viewpoints (for example, two positions), the final result is also a pass, and the remaining viewpoint information that has not yet been used need not be processed. In view of the above, to make the inspection process efficient it is preferable to give priority to viewpoint information that is useful for the inspection, such as viewpoints from which the region to be inspected is clearly observable. Therefore, in the present embodiment, a priority is set for each piece of viewpoint information.

[0670] Here, when the inspection process concerns the result of a robot operation, a viewpoint that makes the difference between before and after the operation clear is useful for the inspection. As an extreme example, consider, as shown in Fig. 67A, an operation in which a smaller object B is assembled to a larger object A from the left side of the drawing. If viewpoint information 1, corresponding to viewpoint position 1 and line-of-sight direction 1 in Fig. 67A, is used, the pre-operation image is as shown in Fig. 67B and the pass image is as shown in Fig. 67C, and no difference arises. In other words, viewpoint information 1 is not useful for inspecting this operation. On the other hand, if viewpoint information 2, corresponding to viewpoint position 2 and line-of-sight direction 2, is used, the pre-operation image is as shown in Fig. 67D and the pass image is as shown in Fig. 67E, and the difference is clear. In this case, the priority of viewpoint information 2 can be set higher than that of viewpoint information 1.

[0671] That is, the larger the change between before and after the operation, the higher the priority should be set; a large change before and after the operation corresponds to a low similarity between the pre-operation image and the pass image described with Figs. 65A to 65D. Therefore, in the processing of S100006, the similarity between the pre-operation image and the pass image is calculated for each of the multiple pieces of viewpoint information (this is obtained when setting the threshold in S100005), and a higher priority is set the lower the similarity. When the inspection process is executed, the imaging unit 5000 is moved to the viewpoints in order of decreasing priority and the inspection is performed.
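The priority setting of S100006 then amounts to ordering the viewpoints by ascending pre-operation-to-pass-image similarity, for example as in the following sketch (the function name is an assumption):

```python
def set_priorities(viewpoints, pre_vs_pass_similarities):
    """Order viewpoints from highest to lowest priority: the lower the
    similarity between the pre-operation image and the pass image (i.e. the
    larger the change the operation causes from this viewpoint), the higher
    the priority."""
    order = sorted(range(len(viewpoints)),
                   key=lambda i: pre_vs_pass_similarities[i])
    return [viewpoints[i] for i in order]
```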

[0672] 3.2 Online Processing

[0673] Next, the flow of the online processing, that is, the inspection process using the second inspection information, will be described with the flowchart of Fig. 68. When the online processing starts, the second inspection information generated by the offline processing described above is first read in (S2001).

[0674] Then, the robot 30000 moves the imaging unit 5000 to the viewpoint position and line-of-sight direction corresponding to each piece of viewpoint information in the viewpoint information group, in a movement order set based on the priority indicated by the priority information. This can be realized, for example, through control by the processing unit 11120 of Fig. 50 or the control unit 3500 of Fig. 51A. Specifically, the piece of viewpoint information with the highest priority among the multiple pieces of viewpoint information included in the viewpoint information group is selected (S2002), and the imaging unit 5000 is moved to the viewpoint position and line-of-sight direction corresponding to that viewpoint information (S2003). Controlling the imaging unit 5000 according to the priority in this way enables an efficient inspection process.

[0675] However, the viewpoint information from the offline processing is basically defined in the object coordinate system and does not take positions in real space (the world coordinate system or robot coordinate system) into account. For example, as shown in Fig. 69A, a viewpoint position and viewpoint direction are set in the object coordinate system on the side of a given face F1 of the inspection target object. In that case, if during the inspection the inspection target object is placed on the workbench with face F1 facing downward as shown in Fig. 69B, the viewpoint position and line-of-sight direction lie below the workbench, and the imaging unit 5000 (the hand-eye camera of the robot 30000) cannot be moved to that position and direction.

[0676] That is, S2003 is a control in which the imaging unit 5000 is not necessarily moved to the position and orientation corresponding to the viewpoint information; instead, it is first determined whether the movement is possible (S2004), and the movement is performed only if it is. Specifically, when the processing unit 11120 determines, from the movable range information of the robot, that the imaging unit 5000 cannot be moved to the viewpoint position and line-of-sight direction corresponding to the i-th piece of viewpoint information (i being a natural number) among the multiple pieces of viewpoint information, it skips the movement of the imaging unit 5000 for the i-th viewpoint information and performs control for the j-th piece of viewpoint information (j being a natural number with j ≠ i) that comes next in the movement order after the i-th. Concretely, if the determination in S2004 is negative, the inspection processing from S2005 onward is skipped, the flow returns to S2002, and the next piece of viewpoint information is selected.

[0677] Here, let N be the number of pieces of viewpoint information included in the viewpoint information group (N being a natural number; N = 18 in the example of Fig. 58 above); i is an integer satisfying 1 ≤ i ≤ N, and j is an integer satisfying 1 ≤ j ≤ N and j ≠ i. The movable range information of the robot is information indicating the range over which the robot, in particular the part on which the imaging unit 5000 is mounted, can move. For each joint included in the robot, the range of joint angles the joint can take is determined by design. Moreover, once the value of each joint angle is determined, a given position of the robot can be computed by forward kinematics. That is, the movable range information is information derived from the design of the robot; it may be the set of possible joint angle values, the positions and orientations in space that the imaging unit 5000 can take, or other information.
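As an illustration of how movable range information can be derived from the design of the robot, the following sketch computes camera positions by forward kinematics for a planar joint chain and samples the joint ranges; a real robot is three-dimensional, but the principle is the same, and all names and the sampling approach are assumptions.

```python
import itertools
import numpy as np

def forward_kinematics(joint_angles, link_lengths):
    """Planar forward kinematics: given joint angles within their design
    ranges, compute the end position (where the hand-eye camera sits) and
    the end orientation."""
    x = y = 0.0
    theta = 0.0
    for q, l in zip(joint_angles, link_lengths):
        theta += q
        x += l * np.cos(theta)
        y += l * np.sin(theta)
    return np.array([x, y]), theta

def sample_movable_range(joint_limits, link_lengths, steps=5):
    """Sample the joint ranges to approximate the set of reachable camera
    positions (one possible form of the movable range information). The
    number of samples grows combinatorially with the number of joints."""
    grids = [np.linspace(lo, hi, steps) for lo, hi in joint_limits]
    return [forward_kinematics(q, link_lengths)[0]
            for q in itertools.product(*grids)]
```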

[0678] The movable range information of the robot is expressed in the robot coordinate system or the world coordinate system. Therefore, to compare the viewpoint information with the movable range information, the viewpoint information in the object coordinate system shown in Fig. 69A must be converted into the positional relationship in real space shown in Fig. 69B, that is, into viewpoint information in the robot coordinate system.

[0679] Accordingly, the information acquisition unit 11110 acquires in advance, as part of the first inspection information, object position and orientation information representing the position and orientation of the inspection target object in the global coordinate system. In the processing of S2004, the processing unit 11120 obtains the viewpoint information expressed in the global coordinate system from the relative relationship between the global coordinate system and the object coordinate system derived from the object position and orientation information, and determines, from the robot's movable range information expressed in the global coordinate system and the viewpoint information expressed in the global coordinate system, whether the imaging unit 5000 can be moved to the viewpoint position and line-of-sight direction indicated by the viewpoint information.
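The transformation of S2004 from the object coordinate system into the global coordinate system, and a greatly simplified reachability check, might look like the following sketch. Representing the movable range as a sampled list of camera poses is an assumption made purely for illustration; a real system would use the joint limits and inverse kinematics, and the direction vectors are assumed to be unit vectors.

```python
import numpy as np

def viewpoint_in_global(obj_pose_R, obj_pose_t, vp_pos_obj, vp_dir_obj):
    """Express a viewpoint given in the object coordinate system in the
    global (robot/world) coordinate system, using the object position and
    orientation information (R, t) from the first inspection information."""
    pos_global = obj_pose_R @ vp_pos_obj + obj_pose_t
    dir_global = obj_pose_R @ vp_dir_obj
    return pos_global, dir_global

def is_reachable(pos_global, dir_global, reachable_poses,
                 pos_tol=5.0, ang_tol=0.1):
    """Check whether some realizable camera pose (position, unit viewing
    direction) is close enough to the requested viewpoint."""
    for p, d in reachable_poses:
        angle = np.arccos(np.clip(np.dot(d, dir_global), -1.0, 1.0))
        if np.linalg.norm(p - pos_global) < pos_tol and angle < ang_tol:
            return True
    return False
```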

[0680] Since this processing is a coordinate transformation, the relative relationship between the two coordinate systems is needed, and that relationship can be obtained from the position and orientation of the reference of the object coordinate system in the global coordinate system, that is, from the object position and orientation information.

[0681] Note that the line-of-sight direction represented by the viewpoint information does not have to be information that uniquely determines the orientation of the imaging unit 5000. Specifically, as described with the viewpoint candidate information of Figs. 58 and 59, the viewpoint position is determined by (x, y, z) and the orientation of the imaging unit 5000 by (ax, ay, az) and (bx, by, bz), but (bx, by, bz) may be disregarded. If all of (x, y, z), (ax, ay, az), and (bx, by, bz) were imposed as conditions when determining whether the imaging unit 5000 can move to the viewpoint position and line-of-sight direction represented by the viewpoint information, it would be difficult to realize a movement of the imaging unit 5000 that satisfies them. Concretely, even when it is possible to image in the direction (ax, ay, az) toward the origin from the position represented by (x, y, z), the vector representing the rotation angle about (ax, ay, az) at that time can only take values within a limited range, and (bx, by, bz) may not be attainable. Therefore, in the present embodiment, the line-of-sight direction may exclude (bx, by, bz), and if (x, y, z) and (ax, ay, az) are satisfied, it is determined that the imaging unit 5000 can move to that viewpoint.

[0682] When the movement of the imaging unit 5000 to the position and orientation corresponding to the viewpoint information has been completed, imaging is performed by the imaging unit 5000 at that position and orientation to acquire a captured image (S2005). The inspection process is carried out by comparing the captured image with the pass image; however, whereas the pass image uses a prescribed value for (bx, by, bz), the rotation angle of the imaging unit 5000 that captures the image about the line-of-sight direction may differ from the angle represented by (bx, by, bz). For example, as in the case where the pass image is Fig. 70A and the captured image is Fig. 70B, a rotation by some angle may arise between the two images. In such a case, cutting out the inspection region would be inappropriate, and so would the similarity calculation. For convenience of explanation, Fig. 70B has a plain single-color background like the pass image generated from the model data, but since Fig. 70B is a captured image, other objects may also appear in it. In addition, depending on the illumination and other factors, the color tone of the inspection target object may also differ from that of the pass image.

[0683] Therefore, the rotation angle between the captured image and the qualified image is computed here (S2006). Specifically, since (bx, by, bz) is used when generating the qualified image, the rotation angle about the viewing direction of the imaging unit (virtual camera) corresponding to the qualified image is known information. Moreover, the position and posture of the imaging unit 5000 at the time the image was captured must also be known to the robot control that moves the imaging unit 5000 to the position and posture corresponding to the viewpoint information; otherwise the movement could not be performed at all. Consequently, the rotation angle of the imaging unit 5000 about the viewing direction at the time of capture can also be obtained. In the process of S2006, the rotation angle between the images is obtained from the difference between these two rotation angles about the viewing direction. Then, using the obtained image rotation angle, a rotational deformation process is applied to at least one of the qualified image and the captured image, thereby correcting the angular difference between the qualified image and the captured image.
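A minimal sketch of the rotation compensation of S2006, assuming both roll angles about the viewing direction are available in degrees. OpenCV's getRotationMatrix2D and warpAffine are used here only as one possible way to realize the rotational deformation; the patent does not prescribe a particular library.

```python
import cv2

def align_rotation(captured, roll_captured_deg, roll_qualified_deg):
    """Rotate the captured image so that its roll about the line of sight matches
    the roll (bx, by, bz) used when rendering the qualified image."""
    delta = roll_qualified_deg - roll_captured_deg   # difference of the two roll angles
    h, w = captured.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), delta, 1.0)
    return cv2.warpAffine(captured, M, (w, h))
```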

[0684] Since the above processing brings the qualified image and the captured image into angular correspondence, the inspection region obtained in S100004 is extracted from each image (S2007), and the confirmation process is performed using that region (S2008). In S2008, the similarity between the qualified image and the captured image is computed; if the similarity is equal to or greater than the threshold obtained in S100005, the result is determined to be a pass, and otherwise it is determined to be a fail.
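A minimal sketch of S2007 and S2008, assuming the inspection region is an axis-aligned rectangle (x, y, w, h) and that normalized cross-correlation is used as the similarity measure; the patent leaves the concrete similarity measure open.

```python
import cv2

def confirm(captured, qualified, region, threshold):
    """Cut out the inspection region from both images (S2007) and compare them (S2008)."""
    x, y, w, h = region
    patch_c = captured[y:y + h, x:x + w]      # inspection region in the captured image
    patch_q = qualified[y:y + h, x:x + w]     # same region in the qualified image
    score = cv2.matchTemplate(patch_c, patch_q, cv2.TM_CCOEFF_NORMED)[0, 0]
    return score >= threshold                 # pass if similarity reaches the threshold
```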

[0685] However, the inspection process need not be performed from only one piece of viewpoint information as described above; a plurality of pieces of viewpoint information may be used. In that case, it is determined whether the confirmation process has been executed the specified number of times (S2009), and the processing ends once that number has been reached. For example, if the inspection process is one that passes only when the confirmation process finds no problem at three positions, then once a pass has been determined three times in S2008, the determination in S2009 is YES, the inspection target object is judged to pass, and the inspection process ends. On the other hand, even if the result in S2008 is a pass, if it is only the first or second such determination, the determination in S2009 is NO and processing continues for the next piece of viewpoint information.

[0686] In the above description, the online processing is also performed by the information acquisition unit 11110 and the processing unit 11120, but this is not limiting; the above processing may instead be performed by the control unit 3500 of the robot 30000 as described above. That is, the online processing may be performed by the information acquisition unit 11110 and the processing unit 11120 of the robot 30000 in FIG. 50, or by the control unit 3500 of the robot in FIG. 51A. Alternatively, it may be performed by the information acquisition unit 11110 and the processing unit 11120 of the processing device in FIG. 51A; in that case the processing device 10000 can be regarded as the control device of the robot, as described above.

[0687] When the online processing is performed by the control unit 3500 of the robot 30000, and the control unit 3500 determines, based on the movable range information of the robot 30000, that the imaging unit 5000 cannot be moved to the viewpoint position and viewing direction corresponding to the i-th (i being a natural number) piece of viewpoint information among the plurality of pieces of viewpoint information, the control unit 3500 skips the movement of the imaging unit 5000 corresponding to the i-th viewpoint information and performs control for the j-th (j being a natural number satisfying i ≠ j) piece of viewpoint information that follows the i-th viewpoint information in the movement order.
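A minimal sketch of the online loop with this skip rule, also folding in the "specified number of confirmations" condition of S2009 described above. Here move_camera, capture, camera_can_reach, and confirm_at are hypothetical helpers standing in for the robot control, image acquisition, movable-range check, and the S2007/S2008 confirmation; none of them is an API defined in the patent.

```python
def inspect(viewpoints, required_passes, camera_can_reach, move_camera, capture, confirm_at):
    """Run the confirmation process over viewpoints already sorted by priority."""
    passes = 0
    for vp in viewpoints:
        if not camera_can_reach(vp):        # i-th viewpoint unreachable within the movable range
            continue                        # skip it and move on to the next (j-th) viewpoint
        move_camera(vp)                     # move the imaging unit to this viewpoint
        if not confirm_at(capture(), vp):   # capture (S2005) and confirm (S2006-S2008)
            return False                    # any failed confirmation rejects the object
        passes += 1
        if passes == required_passes:       # e.g. three successful confirmations (S2009)
            return True
    return False                            # too few reachable viewpoints could be confirmed
```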

[0688] The processing device 10000 and the like of the present embodiment may also realize part or most of their processing by a program. In that case, a processor such as a CPU executes the program, thereby realizing the processing device 10000 and the like of the present embodiment. Specifically, a program stored in a non-transitory information storage medium is read out, and a processor such as a CPU executes the read-out program. Here, the information storage medium (a computer-readable medium) is a medium that stores programs, data, and the like, and its function can be realized by an optical disc (DVD, CD, etc.), an HDD (hard disk drive), a memory (memory card, ROM, etc.), or the like. The processor such as a CPU then performs the various kinds of processing of the present embodiment in accordance with the program (data) stored in the information storage medium. That is, the information storage medium stores a program for causing a computer (a device including an operation unit, a processing unit, a storage unit, and an output unit) to function as the units of the present embodiment (a program for causing the computer to execute the processing of those units).

Claims (17)

1. A robot, characterized in that: the robot performs an inspection process of inspecting an inspection target object using a captured image of the inspection target object captured by an imaging unit; generates, based on first inspection information, second inspection information that includes an inspection region of the inspection process; and performs the inspection process based on the second inspection information, wherein the second inspection information includes a viewpoint information group containing a plurality of pieces of viewpoint information, and each piece of viewpoint information of the viewpoint information group includes a viewpoint position and a viewing direction of the imaging unit in the inspection process.
2. The robot according to claim 1, characterized in that, for each piece of viewpoint information of the viewpoint information group, a priority is set for moving the imaging unit to the viewpoint position and the viewing direction corresponding to that viewpoint information.
3. The robot according to claim 2, characterized in that the imaging unit is moved to the viewpoint position and the viewing direction corresponding to each piece of viewpoint information of the viewpoint information group in accordance with a movement order set based on the priority.
4. The robot according to claim 3, characterized in that, when it is determined based on movable range information that the imaging unit cannot be moved to the viewpoint position and the viewing direction corresponding to an i-th piece of viewpoint information among the plurality of pieces of viewpoint information, the movement of the imaging unit based on the i-th viewpoint information is not performed, and the imaging unit is moved based on a j-th piece of viewpoint information that follows the i-th viewpoint information in the movement order, where i and j are natural numbers and i ≠ j.
5. The robot according to any one of claims 1 to 4, characterized in that the first inspection information includes an inspection processing target position relative to the inspection target object, an object coordinate system corresponding to the inspection target object is set with the inspection processing target position as a reference, and the viewpoint information is generated using the object coordinate system.
6. The robot according to claim 5, characterized in that the first inspection information includes object position/orientation information representing the position and orientation of the inspection target object in a global coordinate system, the viewpoint information in the global coordinate system is obtained based on the relative relationship between the global coordinate system and the object coordinate system determined from the object position/orientation information, and whether the imaging unit can be moved to the viewpoint position and the viewing direction is determined based on movable range information in the global coordinate system and the viewpoint information in the global coordinate system.
7. The robot according to claim 1, characterized in that the inspection process is a process performed on a result of a robot operation, and the first inspection information is information acquired during the robot operation.
8. The robot according to claim 1, characterized in that the first inspection information includes at least one of shape information of the inspection target object, position/orientation information of the inspection target object, and an inspection processing target position relative to the inspection target object.
9. The robot according to claim 1, characterized in that the first inspection information includes three-dimensional model data of the inspection target object.
10. The robot according to claim 9, characterized in that the inspection process is a process performed on a result of a robot operation, and the three-dimensional model data includes post-operation three-dimensional model data obtained by performing the robot operation, and pre-operation three-dimensional model data, i.e., the three-dimensional model data of the inspection target object before the robot operation.
11. The robot according to claim 9 or 10, characterized in that the second inspection information includes a qualified image, and the qualified image is an image obtained by imaging the three-dimensional model data with a virtual camera placed at the viewpoint position and the viewing direction corresponding to the viewpoint information.
12. The robot according to claim 10, characterized in that the second inspection information includes a qualified image and a pre-operation image, the qualified image is an image obtained by imaging the post-operation three-dimensional model data with a virtual camera placed at the viewpoint position and the viewing direction corresponding to the viewpoint information, the pre-operation image is an image obtained by imaging the pre-operation three-dimensional model data with the virtual camera placed at the viewpoint position and the viewing direction corresponding to the viewpoint information, and the inspection region is obtained by comparing the pre-operation image with the qualified image.
13. The robot according to claim 12, characterized in that, in the comparison, a difference image that is the difference between the pre-operation image and the qualified image is obtained, and the inspection region is a region of the difference image that contains the inspection target object.
14. The robot according to claim 10, characterized in that the second inspection information includes a qualified image and a pre-operation image, the qualified image is an image obtained by imaging the post-operation three-dimensional model data with a virtual camera placed at the viewpoint position and the viewing direction corresponding to the viewpoint information, the pre-operation image is an image obtained by imaging the pre-operation three-dimensional model data with the virtual camera placed at the viewpoint position and the viewing direction corresponding to the viewpoint information, and a threshold used in the inspection process performed on the captured image and the qualified image is set based on the similarity between the pre-operation image and the qualified image.
15. The robot according to claim 1, characterized in that the robot includes at least a first arm and a second arm, and the imaging unit is a hand-eye camera provided on at least one of the first arm and the second arm.
16. A processing device, characterized in that the processing device outputs information used in an inspection process to a device that performs the inspection process of inspecting an inspection target object using a captured image of the inspection target object captured by an imaging unit, generates, based on first inspection information, second inspection information that includes viewpoint information containing a viewpoint position and a viewing direction of the imaging unit in the inspection process and an inspection region of the inspection process, and outputs the second inspection information to the device that performs the inspection process.
17. An inspection method for performing an inspection process of inspecting an inspection target object using a captured image of the inspection target object captured by an imaging unit, characterized by including a step of generating, based on first inspection information, second inspection information that includes viewpoint information containing a viewpoint position and a viewing direction of the imaging unit in the inspection process and an inspection region of the inspection process.
CN 201510137541 2013-10-10 2014-10-10 Robot control system, robot, robot control program, and robot control method CN104802166B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
JP2013212930A JP6322949B2 (en) 2013-10-10 2013-10-10 Robot controller, robot system, robot, robot control method, and robot control program
JP2013226536A JP2015085450A (en) 2013-10-31 2013-10-31 Robot control system, robot, program and robot control method
JP2013228655A JP6337445B2 (en) 2013-11-01 2013-11-01 Robot, processing apparatus and inspection method
JP2013228653A JP6217322B2 (en) 2013-11-01 2013-11-01 Robot controller, robot, and robot control method
CN 201410531769 CN104552292A (en) 2013-10-10 2014-10-10 Control system of robot, robot, program and control method of robot

Publications (2)

Publication Number Publication Date
CN104802166A true CN104802166A (en) 2015-07-29
CN104802166B true CN104802166B (en) 2016-09-28

Family

ID=53069890

Family Applications (5)

Application Number Title Priority Date Filing Date
CN 201711203574 CN108081268A (en) 2013-10-10 2014-10-10 Robot control system, robot, robot control program, and robot control method
CN 201510137541 CN104802166B (en) 2013-10-10 2014-10-10 Robot control system, robot, robot control program, and robot control method
CN 201510137542 CN104802174B (en) 2013-10-10 2014-10-10 Robot control system, robot, robot control program, and robot control method
CN 201510136619 CN104959982A (en) 2013-10-10 2014-10-10 Robot control system, robot, program and robot control method
CN 201410531769 CN104552292A (en) 2013-10-10 2014-10-10 Control system of robot, robot, program and control method of robot

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN 201711203574 CN108081268A (en) 2013-10-10 2014-10-10 Robot control system, robot, robot control program, and robot control method

Family Applications After (3)

Application Number Title Priority Date Filing Date
CN 201510137542 CN104802174B (en) 2013-10-10 2014-10-10 Robot control system, robot, robot control program, and robot control method
CN 201510136619 CN104959982A (en) 2013-10-10 2014-10-10 Robot control system, robot, program and robot control method
CN 201410531769 CN104552292A (en) 2013-10-10 2014-10-10 Control system of robot, robot, program and control method of robot

Country Status (1)

Country Link
CN (5) CN108081268A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108081268A (en) * 2013-10-10 2018-05-29 Seiko Epson Corporation Robot control system, robot, robot control program, and robot control method
CN104965489A (en) * 2015-07-03 2015-10-07 昆山市佰奥自动化设备科技有限公司 CCD automatic positioning assembly system and method based on robot

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5608847A (en) * 1981-05-11 1997-03-04 Sensor Adaptive Machines, Inc. Vision target based assembly
DE3405909A1 (en) * 1984-02-18 1985-08-22 Licentia Gmbh Apparatus for tracking, metrological analysis and/or control of technical process sequences
JPS62192807A (en) * 1986-02-20 1987-08-24 Fujitsu Ltd Robot control system
JPH03220603A (en) * 1990-01-26 1991-09-27 Citizen Watch Co Ltd Robot control method
US6718233B2 (en) * 2002-03-29 2004-04-06 Nortel Networks, Ltd. Placement of an optical component on a substrate
JP3940998B2 (en) * 2002-06-06 2007-07-04 株式会社安川電機 Robotic device
WO2008047872A1 (en) * 2006-10-20 2008-04-24 Hitachi, Ltd. Manipulator
US8864652B2 (en) * 2008-06-27 2014-10-21 Intuitive Surgical Operations, Inc. Medical robotic system providing computer generated auxiliary views of a camera instrument for controlling the positioning and orienting of its tip
JP5509859B2 (en) * 2010-01-13 2014-06-04 株式会社Ihi Robot control apparatus and method
JP4837116B2 (en) * 2010-03-05 2011-12-14 ファナック株式会社 Robotic system equipped with visual sensors
CN102059703A (en) * 2010-11-22 2011-05-18 北京理工大学 Self-adaptive particle filter-based robot vision servo control method
CN103517789B (en) * 2011-05-12 2015-11-25 株式会社Ihi Apparatus and method for controlling the motion prediction
JP2012254518A (en) * 2011-05-16 2012-12-27 Seiko Epson Corp Robot control system, robot system and program
JP5834545B2 (en) * 2011-07-01 2015-12-24 セイコーエプソン株式会社 Robot, the robot controller, a robot control method, and a robot control program
CN102501252A (en) * 2011-09-28 2012-06-20 三一重工股份有限公司 Method and system for controlling movement of tail end of executing arm
JP6000579B2 (en) * 2012-03-09 2016-09-28 キヤノン株式会社 Information processing apparatus, information processing method
CN108081268A (en) * 2013-10-10 2018-05-29 Seiko Epson Corporation Robot control system, robot, robot control program, and robot control method

Also Published As

Publication number Publication date Type
CN104802174B (en) 2016-09-07 grant
CN104552292A (en) 2015-04-29 application
CN104802174A (en) 2015-07-29 application
CN104802166A (en) 2015-07-29 application
CN104959982A (en) 2015-10-07 application
CN108081268A (en) 2018-05-29 application

Similar Documents

Publication Publication Date Title
US8095237B2 (en) Method and apparatus for single image 3D vision guided robotics
Wijesoma et al. Eye-to-hand coordination for vision-guided robot control applications
US20080027580A1 (en) Robot programming method and apparatus with both vision and force
US20110071675A1 (en) Visual perception system and method for a humanoid robot
US20130238124A1 (en) Information processing apparatus and information processing method
Baeten et al. Hybrid vision/force control at corners in planar robotic-contour following
US20130147944A1 (en) Vision-guided alignment system and method
CN102294695A (en) Robot calibration method and calibration system
CN102120307A (en) System and method for grinding industrial robot on basis of visual information
US20130238128A1 (en) Information processing apparatus and information processing method
JP2009269110A (en) Assembly equipment
US20080249659A1 (en) Method and system for establishing no-entry zone for robot
JP2005011580A (en) Connector holding device, and connector inspection system and connector connection system equipped therewith
JP2002018754A (en) Robot device and its control method
US20130011018A1 (en) Information processing apparatus and information processing method
US20130158947A1 (en) Information processing apparatus, control method for information processing apparatus and storage medium
Andreff et al. Image-based visual servoing of a Gough-Stewart parallel manipulator using leg observations
Hebert et al. Fusion of stereo vision, force-torque, and joint sensors for estimation of in-hand object location.
JP2005342832A (en) Robot system
JP2012254518A (en) Robot control system, robot system and program
US20130054025A1 (en) Information processing apparatus, control method for information processing apparatus, and recording medium
Lippiello et al. A position-based visual impedance control for robot manipulators
JP2003117861A (en) Position correcting system of robot
US20090234502A1 (en) Apparatus for determining pickup pose of robot arm with camera
CN101100060A (en) Device, program, recording medium and method for preparing robot program

Legal Events

Date Code Title Description
C06 Publication
EXSB Decision made by SIPO to initiate substantive examination
C14 Grant of patent or utility model