CN108081268A - Robot control system, robot, program and robot control method - Google Patents
- Publication number
- CN108081268A (application CN201711203574.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- endpoint
- robot
- mentioned
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B23—MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
- B23P—METAL-WORKING NOT OTHERWISE PROVIDED FOR; COMBINED OPERATIONS; UNIVERSAL MACHINE TOOLS
- B23P19/00—Machines for simply fitting together or separating metal parts or objects, or metal and non-metal parts, whether or not involving some deformation; Tools or devices therefor so far as not provided for in other classes
- B23P19/001—Article feeders for assembling machines
- B23P19/007—Picking-up and placing mechanisms
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
Landscapes
- Engineering & Computer Science (AREA)
- Mechanical Engineering (AREA)
- Robotics (AREA)
- Automation & Control Theory (AREA)
- Human Computer Interaction (AREA)
- Manipulator (AREA)
Abstract
A robot control system includes: a captured-image acquisition unit that obtains captured images; and a control unit that controls a robot according to the captured images. The captured-image acquisition unit obtains a captured image in which, of an assembly object subject to an assembly operation and an assembled object, at least the assembled object appears. According to the captured image, the control unit performs feature-quantity detection processing on the assembled object, and moves the assembly object according to the detected feature quantities of the assembled object.
Description
This application is a divisional application of Chinese patent application No. 201410531769.6, filed October 10, 2014, entitled "Robot control system, robot, program and robot control method".
Technical field
The present invention relates to a robot control system, a robot, a program, a robot control method, and the like.
Background technology
In recent years, industrial robots have been widely introduced at production sites in order to mechanize and automate work formerly performed by people. However, positioning a robot presupposes accurate calibration, which is an obstacle to introducing robots.
Visual servoing is one means of positioning a robot. Conventional visual servoing is a technique that performs feedback control of a robot according to the difference between a reference image (goal image, target image) and a captured image (current image). Because visual servoing does not require calibration precision, it is attracting attention as a technique that lowers the barrier to introducing robots.
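The feedback loop described above can be sketched as a simple proportional controller in image-feature space. This is a minimal illustration, not the patent's implementation: the feature representation (2-D points), the gain, and the assumption that features move by the commanded amount are all simplifications introduced here.

```python
import numpy as np

def visual_servo_step(current_features, target_features, gain=0.5):
    """One feedback iteration: the command is proportional to the feature
    error between the reference (goal) image and the current image."""
    error = target_features - current_features
    command = gain * error                      # proportional control law
    return command, np.linalg.norm(error)

# Simulated convergence: assume features move by the commanded amount.
current = np.array([[10.0, 20.0], [30.0, 40.0]])
target = np.array([[12.0, 22.0], [28.0, 38.0]])
for _ in range(50):
    cmd, err = visual_servo_step(current, target)
    current = current + cmd
print(err)  # error norm shrinks geometrically toward 0
```

Because the error is recomputed from the current image at every step, the loop corrects for disturbances without requiring calibrated coordinates, which is the property the background section highlights.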
As a technique related to visual servoing, there is, for example, the prior art described in Patent Document 1.
Patent Document 1: Japanese Unexamined Patent Application Publication No. 2011-143494
When a robot is made to perform, by visual servoing, an assembly operation that assembles an assembly object into an assembled object, the position and posture of the assembled object change every time the operation is performed. When the position and posture of the assembled object change, the position and posture of the assembly object in its assembled state also change.
If the same reference image is used for every visual servo run, correct assembly cannot be achieved. This is because the assembly object is always moved toward the position and posture of the assembly object shown in the reference image, regardless of how the position and posture it should take in the assembled state have changed.
In theory, assembly could be performed by visual servoing if a different reference image were used every time the position of the actual assembled object changed, but this would require preparing an enormous number of reference images and is impractical.
Summary of the invention
One aspect of the present invention relates to a robot control system including: a captured-image acquisition unit that obtains captured images; and a control unit that controls a robot according to the captured images. The captured-image acquisition unit obtains a captured image in which, of an assembly object subject to an assembly operation and an assembled object, at least the assembled object appears. According to the captured image, the control unit performs feature-quantity detection processing on the assembled object, and moves the assembly object according to the feature quantities of the assembled object.
In this aspect of the invention, the assembly object is moved according to the feature quantities of the assembled object detected from the captured image. As a result, the assembly operation can be performed correctly even when the position and posture of the assembled object vary.
In this aspect of the invention, the control unit may perform the feature-quantity detection processing on the assembly object and on the assembled object according to one or more captured images in which both the assembly object and the assembled object appear, and may move the assembly object, according to the feature quantities of the assembly object and the feature quantities of the assembled object, so that the relative position-posture relation between the assembly object and the assembled object becomes a target relative position-posture relation.
This makes it possible to perform the assembly operation and the like according to the feature quantities of the assembly object and the feature quantities of the assembled object detected from the captured images.
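The relative-relation control above can be sketched with features reduced to 2-D centroids; this is an assumed simplification (orientation is omitted), not the patent's method, and all names are illustrative.

```python
import numpy as np

def relative_move(assembly_feats, assembled_feats, target_offset):
    """Return the translation of the assembly object that makes the
    centroid offset (assembly - assembled) equal the target relative offset."""
    assembly_c = assembly_feats.mean(axis=0)     # detected assembly features
    assembled_c = assembled_feats.mean(axis=0)   # detected assembled features
    current_offset = assembly_c - assembled_c
    return target_offset - current_offset        # translation to apply

assembly = np.array([[5.0, 5.0], [7.0, 5.0]])
assembled = np.array([[0.0, 0.0], [2.0, 0.0]])
move = relative_move(assembly, assembled, target_offset=np.array([0.0, 1.0]))
print(move)
```

Because both sets of features are re-detected from each captured image, the computed move tracks wherever the assembled object actually is, rather than a fixed reference pose.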
In this aspect of the invention, the control unit may move the assembly object so that the relative position-posture relation becomes the target relative position-posture relation, according to feature quantities set as target feature quantities among the feature quantities of the assembled object and feature quantities set as attention feature quantities among the feature quantities of the assembly object.
This makes it possible to move the assembly object and the like so that the relative position-posture relation between the set assembly part of the assembly object and the set assembled part of the assembled object becomes the target relative position-posture relation.
In this aspect of the invention, the control unit may move the assembly object so that the attention feature points of the assembly object coincide with, or approach, the target feature points of the assembled object.
This makes it possible to assemble the assembly part of the assembly object to the assembled part of the assembled object, and so on.
This aspect of the invention may also include a reference image storage unit that stores a reference image showing the assembly object at a target position and posture. The control unit moves the assembly object to the target position and posture according to the reference image and a first captured image in which the assembly object appears; after the assembly object has moved, the control unit performs the feature-quantity detection processing on the assembled object according to a second captured image in which at least the assembled object appears, and moves the assembly object according to the feature quantities of the assembled object.
This makes it possible, when the same assembly operation is repeated, to use the same reference image to move the assembly object to the vicinity of the assembled object, and then to perform the assembly operation and the like against the actual detailed position and posture of the assembled object.
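The two-stage structure above — a coarse move from a fixed reference image, then a fine correction from the actually detected assembled object — can be sketched as follows. This is an assumed, heavily simplified illustration (poses are 2-D positions; all parameter names are hypothetical).

```python
import numpy as np

def assemble(reference_pose, detected_assembled_pose, fine_offset):
    """Stage 1: move to the fixed pose taken from the stored reference image.
    Stage 2: correct against the assembled object detected this run."""
    intermediate = np.asarray(reference_pose, dtype=float)         # coarse
    final = (np.asarray(detected_assembled_pose, dtype=float)
             + np.asarray(fine_offset, dtype=float))               # fine
    return intermediate, final

intermediate, final = assemble(
    reference_pose=[10.0, 10.0],           # pose shown in the reference image
    detected_assembled_pose=[10.5, 9.8],   # assembled object shifted this run
    fine_offset=[0.0, 1.0],                # desired pose relative to it
)
print(intermediate, final)
```

The same reference image is reused on every run for stage 1, while stage 2 absorbs the run-to-run shift of the assembled object, matching the motivation given in the background section.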
In this aspect of the invention, the control unit may, during the assembly operation, perform the feature-quantity detection processing on a first assembled object according to a first captured image in which the first assembled object appears, and move the assembly object according to the feature quantities of the first assembled object; after the assembly object has moved, the control unit performs the feature-quantity detection processing on a second assembled object according to a second captured image in which at least the second assembled object appears, and moves the assembly object and the first assembled object according to the feature quantities of the second assembled object.
As a result, even if the positions of the first assembled object and the second assembled object shift each time the assembly operation is performed, the assembly operation on the assembly object, the first assembled object and the second assembled object can still be carried out.
In this aspect of the invention, the control unit may, during the assembly operation, perform the feature-quantity detection processing on the assembly object and on a first assembled object according to one or more first captured images in which the assembly object and the first assembled object appear, and move the assembly object, according to the feature quantities of the assembly object and the feature quantities of the first assembled object, so that the relative position-posture relation between the assembly object and the first assembled object becomes a first target relative position-posture relation; the control unit may further perform the feature-quantity detection processing on a second assembled object according to a second captured image in which the second assembled object appears, and move the assembly object and the first assembled object, according to the feature quantities of the first assembled object and the feature quantities of the second assembled object, so that the relative position-posture relation between the first assembled object and the second assembled object becomes a second target relative position-posture relation.
This makes it possible to perform visual servoing and the like so that the attention feature points of the assembly object approach the target feature points of the first assembled object, and the attention feature points of the first assembled object approach the target feature points of the second assembled object.
In this aspect of the invention, the control unit may, during the assembly operation, perform the feature-quantity detection processing on the assembly object, a first assembled object and a second assembled object according to one or more captured images in which the assembly object, the first assembled object and the second assembled object appear; the control unit moves the assembly object, according to the feature quantities of the assembly object and the feature quantities of the first assembled object, so that the relative position-posture relation between the assembly object and the first assembled object becomes a first target relative position-posture relation, and moves the first assembled object, according to the feature quantities of the first assembled object and the feature quantities of the second assembled object, so that the relative position-posture relation between the first assembled object and the second assembled object becomes a second target relative position-posture relation.
This makes it possible to assemble three workpieces simultaneously, and so on.
In this aspect of the invention, the control unit may, during the assembly operation, perform the feature-quantity detection processing on a second assembled object according to a first captured image in which the second assembled object appears, and move a first assembled object according to the feature quantities of the second assembled object; the control unit may then perform the feature-quantity detection processing on the first assembled object according to a second captured image in which the moved first assembled object appears, and move the assembly object according to the feature quantities of the first assembled object.
As a result, the assembly object need not move simultaneously with the first assembled object, and the robot can be controlled more easily.
In this aspect of the invention, the control unit may control the robot by performing visual servoing based on the captured image.
This allows feedback control of the robot and the like according to the current work status.
Another aspect of the present invention relates to a robot including: a captured-image acquisition unit that obtains captured images; and a control unit that controls the robot according to the captured images. The captured-image acquisition unit obtains a captured image in which, of an assembly object subject to an assembly operation and an assembled object, at least the assembled object appears. According to the captured image, the control unit performs feature-quantity detection processing on the assembled object, and moves the assembly object according to the feature quantities of the assembled object.
Another aspect of the present invention relates to a program that causes a computer to function as each of the units described above.
Another aspect of the present invention relates to a robot control method including: a step of obtaining a captured image in which, of an assembly object subject to an assembly operation and an assembled object, at least the assembled object appears; a step of performing feature-quantity detection processing on the assembled object according to the captured image; and a step of moving the assembly object according to the feature quantities of the assembled object.
According to several aspects of the present invention, a robot control system, a robot, a program, a robot control method and the like can be provided that perform the assembly operation correctly even when the position and posture of the assembled object vary.
Another aspect is a robot controller including: a first control unit that generates command values so that the endpoint of a robot arm moves toward a target position along a path formed from one or more set teaching positions; an image acquisition unit that obtains a target image, that is, an image containing the endpoint when the endpoint is at the target position, and a current image, that is, an image containing the endpoint when the endpoint is at a current position; a second control unit that generates command values so that the endpoint moves from the current position toward the target position according to the current image and the target image; and a drive control unit that moves the arm using the command values generated by the first control unit and the command values generated by the second control unit.
According to this aspect, command values are generated so that the endpoint of the robot arm moves toward the target position along the path formed from the one or more set teaching positions, and command values are also generated so that the endpoint moves from the current position toward the target position according to the current image and the target image. The arm is then moved using both sets of command values. This maintains the high speed of position control while also coping with changes in the target position.
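The two command generators and the drive control unit described above can be sketched as follows. This is an illustrative structure only — the control laws, gains, and the simple vector representation of commands are assumptions, not the patent's implementation.

```python
import numpy as np

def position_control_cmd(endpoint, waypoint, gain=1.0):
    """First control unit: command toward the next teaching position."""
    return gain * (np.asarray(waypoint) - np.asarray(endpoint))

def visual_servo_cmd(current_image_pos, target_image_pos, gain=1.0):
    """Second control unit: command from the image-space error between
    the current image and the target image."""
    return gain * (np.asarray(target_image_pos) - np.asarray(current_image_pos))

def drive_command(pos_cmd, vs_cmd, weight):
    """Drive control unit: superimpose the two command values with a
    prescribed weight (1.0 -> pure position control, 0.0 -> pure servo)."""
    return weight * pos_cmd + (1.0 - weight) * vs_cmd

pos = position_control_cmd([0.0, 0.0], [4.0, 0.0])
vs = visual_servo_cmd([0.0, 0.0], [0.0, 2.0])
cmd = drive_command(pos, vs, weight=0.5)
print(cmd)
```

Position control supplies a fast, pre-planned motion; the visual-servo term continuously pulls toward wherever the target actually appears, so the blend keeps speed while tolerating target movement.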
Another aspect is a robot controller including: a control unit that generates a trajectory of the endpoint of a robot arm so that the endpoint approaches a target position; and an image acquisition unit that obtains a current image, that is, an image containing the endpoint when the endpoint is at a current position, and a target image, that is, an image containing the endpoint when the endpoint is at the target position. The control unit moves the arm according to a path formed from one or more set teaching positions together with the current image and the target image. This maintains the high speed of position control while also coping with changes in the target position.
Here, the drive control unit may move the arm using a signal obtained by superimposing, with prescribed weights, the command values generated by the first control unit and the command values generated by the second control unit. This makes it possible to shape the trajectory of the endpoint into a desired trajectory, for example a trajectory that, while not ideal in itself, keeps the object within the field of view of the hand-eye camera.
Here, the drive control unit may determine the prescribed weight according to the difference between the current position and the target position. Because the weight then changes continuously with distance, control can be switched smoothly.
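A distance-dependent weight of this kind can be sketched as below; the linear profile and the threshold distance are illustrative assumptions, chosen only to show a weight that varies continuously with the remaining distance.

```python
import numpy as np

def blending_weight(current, target, far_distance=10.0):
    """Weight in [0, 1] for the position-control command: far from the
    target, fast position control dominates; near it, the weight falls
    continuously so the visual-servo command takes over smoothly."""
    d = np.linalg.norm(np.asarray(target) - np.asarray(current))
    return min(d / far_distance, 1.0)

print(blending_weight([0.0, 0.0], [20.0, 0.0]))  # far  -> 1.0
print(blending_weight([0.0, 0.0], [5.0, 0.0]))   # near -> 0.5
```

Because the weight is a continuous function of distance, there is no discrete hand-over point between the two controllers, which is the smooth-switching property stated above.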
Here, an input unit for entering the prescribed weight may also be provided. This allows the arm to be moved along a trajectory desired by the user.
Here, a storage unit that stores the prescribed weight may also be provided. This allows a weight set in advance to be used.
Here, the drive control unit may be configured so that, when the current position satisfies a prescribed condition, the arm is driven using the command values based on the trajectory generated by the first control unit, and when the current position does not satisfy the prescribed condition, the arm is driven using the command values based on the trajectory generated by the first control unit together with the command values based on the trajectory generated by the second control unit. This enables faster processing.
Here, the controller may further include: a force detection unit that detects the force applied to the endpoint; and a third control unit that generates a trajectory of the endpoint, according to the value detected by the force detection unit, so that the endpoint moves from the current position toward the target position. The drive control unit moves the arm using the command values based on the trajectories generated by the first, second and third control units, or using the command values based on the trajectories generated by the first and third control units. As a result, even when the target position moves, or when the target position cannot be confirmed, the high speed of position control is maintained and the operation can be performed safely.
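The conditional selection of command values described in the last two paragraphs — using all three control units when vision is available, and dropping the visual-servo term when the target cannot be confirmed — can be sketched as follows. The gains, the scalar command representation, and the visibility flag are all illustrative assumptions.

```python
def force_control_cmd(measured_force, target_force=0.0, gain=0.1):
    """Third control unit: command derived from the force applied to
    the endpoint (simple proportional force regulation, assumed)."""
    return gain * (target_force - measured_force)

def combined_cmd(pos_cmd, vs_cmd, force_cmd, target_visible):
    """Superimpose first + second + third command values, or omit the
    visual-servo command when the target cannot be confirmed visually."""
    if target_visible:
        return pos_cmd + vs_cmd + force_cmd
    return pos_cmd + force_cmd

cmd = combined_cmd(1.0, 0.5, force_control_cmd(2.0), target_visible=False)
print(cmd)
```

The force term provides feedback even when the camera loses the target, which is how the aspect above keeps the operation both fast and safe.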
Another aspect is a robot system including: a robot having an arm; a first control unit that generates command values so that the endpoint of the arm moves toward a target position along a path formed from one or more set teaching positions; an imaging unit that captures a target image, that is, an image containing the endpoint when the endpoint is at the target position, and a current image, that is, an image containing the endpoint when the endpoint is at the current position at the present time; a second control unit that generates command values so that the endpoint moves from the current position toward the target position according to the current image and the target image; and a drive control unit that moves the arm using the command values generated by the first control unit and the command values generated by the second control unit. This maintains the high speed of position control while also coping with changes in the target position.
Another aspect is a robot system including: a robot having an arm; a control unit that generates a trajectory of the endpoint of the arm so that the endpoint approaches a target position; and an imaging unit that captures a current image, that is, an image containing the endpoint when the endpoint is at the current position at the present time, and a target image, that is, an image containing the endpoint when the endpoint is at the target position. The control unit moves the arm according to a path formed from one or more set teaching positions together with the current image and the target image. This maintains the high speed of position control while also coping with changes in the target position.
Another aspect is a robot including: an arm; a first control unit that generates command values so that the endpoint of the arm moves toward a target position along a path formed from one or more set teaching positions; an image acquisition unit that obtains a target image, that is, an image containing the endpoint when the endpoint is at the target position, and a current image, that is, an image containing the endpoint when the endpoint is at the current position at the present time; a second control unit that generates command values so that the endpoint moves from the current position toward the target position according to the current image and the target image; and a drive control unit that moves the arm using the command values generated by the first control unit and the command values generated by the second control unit. This maintains the high speed of position control while also coping with changes in the target position.
Another aspect is a robot including: an arm; a control unit that generates a trajectory of the endpoint of the arm so that the endpoint approaches a target position; and an image acquisition unit that obtains a current image, that is, an image containing the endpoint when the endpoint is at a current position, and a target image, that is, an image containing the endpoint when the endpoint is at the target position. The control unit moves the arm according to a path formed from one or more set teaching positions together with the current image and the target image. This maintains the high speed of position control while also coping with changes in the target position.
Another aspect is a robot control method including: a step of obtaining a target image, that is, an image containing the endpoint of a robot arm when the endpoint is at a target position; a step of obtaining a current image, that is, an image containing the endpoint when the endpoint is at the current position at the present time; and a step of generating command values so that the endpoint moves toward the target position along a path formed from one or more set teaching positions, generating command values so that the endpoint moves from the current position toward the target position according to the current image and the target image, and moving the arm using these command values. This maintains the high speed of position control while also coping with changes in the target position.
Another aspect is a robot control method for controlling the arm of a robot that has an arm and an image acquisition unit that obtains a current image, that is, an image containing the endpoint of the arm when the endpoint is at a current position, and a target image, that is, an image containing the endpoint when the endpoint is at a target position. The arm is controlled using command values of position control performed according to a path formed from one or more set teaching positions, and command values of visual servoing performed according to the current image and the target image. This maintains the high speed of position control while also coping with changes in the target position.
Another aspect is a robot control method for controlling the arm of a robot that has an arm and an image acquisition unit that obtains a current image, that is, an image containing the endpoint of the arm when the endpoint is at a current position, and a target image, that is, an image containing the endpoint when the endpoint is at a target position. Position control performed according to a path formed from one or more set teaching positions and visual servoing performed according to the current image and the target image are carried out simultaneously. This maintains the high speed of position control while also coping with changes in the target position.
Another aspect is a robot control program that causes an arithmetic unit to execute the following steps: a step of obtaining a target image, that is, an image containing the endpoint of a robot arm when the endpoint is at a target position; a step of obtaining a current image, that is, an image containing the endpoint when the endpoint is at the current position at the present time; and a step of generating command values so that the endpoint moves toward the target position along a path formed from one or more set teaching positions, generating command values so that the endpoint moves from the current position toward the target position according to the current image and the target image, and moving the arm using these command values. This maintains the high speed of position control while also coping with changes in the target position.
Another aspect relates to a robot control device including: a robot control unit that controls a robot based on image information; a variation computing unit that obtains an image feature variation from the image information; a variation inferring unit that computes an inferred image feature variation, that is, an inferred amount of the image feature variation, from variation-inference information, which is information about the robot or an object other than the image information; and an abnormality determination unit that performs abnormality determination through comparison processing of the image feature variation and the inferred image feature variation.
In this aspect, abnormality determination for robot control using image information is performed based on the image feature variation and the inferred image feature variation obtained from the variation-inference information. This makes it possible to appropriately perform abnormality determination in robot control using image information, particularly in methods that use image features.
In another aspect, the variation-inference information may be joint angle information of the robot. Thus, the joint angle information of the robot can be used as the variation-inference information.
In another aspect, the variation inferring unit may compute the inferred image feature variation by applying, to the variation of the joint angle information, a Jacobian matrix that relates the variation of the joint angle information to the image feature variation. This makes it possible to infer the image feature variation from the variation of the joint angle information using the Jacobian matrix.
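The mapping described above is the linear relation Δf ≈ J · Δθ, where J is the image Jacobian. A minimal numerical sketch follows; the Jacobian entries and joint angles are illustrative values, not taken from the patent.

```python
import numpy as np

# Image Jacobian J maps joint-angle changes to image-feature changes: df ≈ J · dθ.
# Illustrative values: 2 image features, 2 joints (units: pixels per radian).
J = np.array([[120.0, -15.0],
              [  8.0,  95.0]])

theta_prev = np.array([0.10, 0.20])  # joint angles at the first image time (rad)
theta_curr = np.array([0.12, 0.19])  # joint angles at the second image time (rad)

d_theta = theta_curr - theta_prev    # variation of the joint angle information
df_inferred = J @ d_theta            # inferred image feature variation
print(df_inferred)
```

Comparing `df_inferred` against the variation actually measured from two images is the basis of the abnormality determination described in the following aspects.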
In another aspect, the variation-inference information may be position and posture information of the end effector of the robot or of the object. Thus, position and posture information of the end effector of the robot or of the object can be used as the variation-inference information.
In another aspect, the variation inferring unit may compute the inferred image feature variation by applying, to the variation of the position and posture information, a Jacobian matrix that relates the variation of the position and posture information to the image feature variation. This makes it possible to infer the image feature variation from the variation of the position and posture information using the Jacobian matrix.
In another aspect, when an image feature f1 is obtained from first image information at an i-th time (i is a natural number) and an image feature f2 is obtained from second image information at a j-th time (j is a natural number satisfying j ≠ i), the variation computing unit may obtain the difference between the image feature f1 and the image feature f2 as the image feature variation; and when variation-inference information p1 corresponding to the first image information is obtained at a k-th time (k is a natural number) and variation-inference information p2 corresponding to the second image information is obtained at an l-th time (l is a natural number), the variation inferring unit may obtain the inferred image feature variation from the variation-inference information p1 and the variation-inference information p2. This makes it possible to obtain, with the timing taken into account, an inferred image feature variation that corresponds to the image feature variation.
In another aspect, the k-th time may be the acquisition time of the first image information, and the l-th time may be the acquisition time of the second image information. This simplifies the handling of timing in the case where the joint angle information can be acquired at high speed.
In another aspect, the abnormality determination unit may perform comparison processing between a threshold and the difference information between the image feature variation and the inferred image feature variation, and determine an abnormality when the difference information is larger than the threshold. This makes it possible to perform abnormality determination by threshold comparison.
In another aspect, the abnormality determination unit may set the threshold larger as the difference between the acquisition times of the two pieces of image information used by the variation computing unit to compute the image feature variation becomes larger. This makes it possible to change the threshold according to the situation.
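The threshold comparison of the two preceding aspects can be sketched as follows. The function name and the tuning parameters `base_threshold` and `slope` are illustrative assumptions; the point is that the allowed mismatch between observed and inferred feature variations grows with the interval between the two image acquisitions.

```python
import numpy as np

def is_abnormal(df_observed, df_inferred, dt, base_threshold=1.0, slope=0.5):
    """Flag an abnormality when the observed and inferred image feature
    variations disagree by more than a time-dependent threshold.

    dt: difference between the acquisition times of the two images; the
    threshold grows with dt, since more motion is expected between frames.
    """
    diff = np.linalg.norm(np.asarray(df_observed) - np.asarray(df_inferred))
    threshold = base_threshold + slope * dt
    return bool(diff > threshold)

print(is_abnormal([2.5, -0.8], [2.55, -0.79], dt=0.033))  # small mismatch
print(is_abnormal([9.0,  4.0], [2.55, -0.79], dt=0.033))  # large mismatch
```

A small mismatch passes, while a gross disagreement (for example, an occluded or mis-tracked feature) is flagged as abnormal.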
In another aspect, when an abnormality is detected by the abnormality determination unit, the robot control unit may perform control to stop the robot. By stopping the robot upon abnormality detection, safe robot control can be realized.
In another aspect, when an abnormality is detected by the abnormality determination unit, the robot control unit may skip control based on the abnormality-determination image information, that is, the later in the time series of the two pieces of image information used by the variation computing unit to compute the image feature variation, and instead perform control based on image information obtained earlier than the abnormality-determination image information. This makes it possible to control the robot while skipping the abnormality-determination image information when an abnormality is detected.
Another aspect relates to a robot control device including: a robot control unit that controls a robot based on image information; a variation computing unit that obtains a position-posture variation representing the change of the position and posture information of the end effector of the robot or of an object, or a joint angle variation representing the change of the joint angle information of the robot; a variation inferring unit that obtains an image feature variation from the image information and, from the image feature variation, obtains an inferred position-posture variation, that is, an inferred amount of the position-posture variation, or an inferred joint angle variation, that is, an inferred amount of the joint angle variation; and an abnormality determination unit that performs abnormality determination through comparison processing of the position-posture variation and the inferred position-posture variation, or comparison processing of the joint angle variation and the inferred joint angle variation.
In this aspect, the inferred position-posture variation or the inferred joint angle variation is obtained from the image feature variation, and abnormality determination is performed by comparing the position-posture variation with the inferred position-posture variation, or the joint angle variation with the inferred joint angle variation. Thus, abnormality determination in robot control using image information, particularly in methods that use image features, can also be performed appropriately through comparison of position-posture information or joint angle information.
In another aspect, the variation computing unit may perform any one of: a process of obtaining multiple pieces of the position and posture information and obtaining their difference as the position-posture variation; a process of obtaining multiple pieces of the position and posture information and obtaining the joint angle variation from their difference; a process of obtaining multiple pieces of the joint angle information and obtaining their difference as the joint angle variation; and a process of obtaining multiple pieces of the joint angle information and obtaining the position-posture variation from their difference. This makes it possible to obtain the position-posture variation or the joint angle variation by various means.
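One of the conversions listed above, deriving a position-posture variation from a difference of joint angles, can be sketched with forward kinematics. The planar two-link model below (link lengths `l1`, `l2`) is an illustrative assumption, not the robot of the embodiments.

```python
import numpy as np

def fk_2link(theta, l1=0.3, l2=0.2):
    """End-effector position of a planar 2-link arm (illustrative model)."""
    x = l1 * np.cos(theta[0]) + l2 * np.cos(theta[0] + theta[1])
    y = l1 * np.sin(theta[0]) + l2 * np.sin(theta[0] + theta[1])
    return np.array([x, y])

theta_a = np.array([0.0, np.pi / 2])  # joint angles at the first time
theta_b = np.array([0.1, np.pi / 2])  # joint angles at the second time

d_theta = theta_b - theta_a                     # joint angle variation
d_pose = fk_2link(theta_b) - fk_2link(theta_a)  # position variation via FK
print(d_theta, d_pose)
```

The reverse direction (joint angle variation from a pose difference) would analogously pass through inverse kinematics or an inverse Jacobian.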
Another aspect relates to a robot including: a robot control unit that controls the robot based on image information; a variation computing unit that obtains an image feature variation from the image information; a variation inferring unit that computes an inferred image feature variation, that is, an inferred amount of the image feature variation, from variation-inference information, which is information about the robot or an object other than the image information; and an abnormality determination unit that performs abnormality determination through comparison processing of the image feature variation and the inferred image feature variation.
In this aspect, abnormality determination for robot control using image information is performed based on the image feature variation and the inferred image feature variation obtained from the variation-inference information. This makes it possible to appropriately perform abnormality determination in robot control using image information, particularly in methods that use image features.
Another aspect relates to a robot control method for controlling a robot based on image information, including: a variation computation step of obtaining an image feature variation from the image information; a variation inference step of computing an inferred image feature variation, that is, an inferred amount of the image feature variation, from variation-inference information, which is information about the robot or an object other than the image information; and a step of performing abnormality determination through comparison processing of the image feature variation and the inferred image feature variation.
In this aspect, abnormality determination for robot control using image information is performed based on the image feature variation and the inferred image feature variation obtained from the variation-inference information. This makes it possible to appropriately perform abnormality determination in robot control using image information, particularly in methods that use image features.
Another aspect relates to a program that causes a computer to function as: a robot control unit that controls a robot based on image information; a variation computing unit that obtains an image feature variation from the image information; a variation inferring unit that computes an inferred image feature variation, that is, an inferred amount of the image feature variation, from variation-inference information, which is information about the robot or an object other than the image information; and an abnormality determination unit that performs abnormality determination through comparison processing of the image feature variation and the inferred image feature variation.
In this aspect, the computer is caused to perform abnormality determination for robot control using image information, based on the image feature variation and the inferred image feature variation obtained from the variation-inference information. This makes it possible to appropriately perform abnormality determination in robot control using image information, particularly in methods that use image features.
Thus, according to several aspects, it is possible to provide a robot control device, a robot, and a robot control method that appropriately detect abnormalities in robot control that is realized based on image information and that uses image features.
Another aspect relates to a robot that performs inspection processing of an inspection object using a captured image of the inspection object taken by an imaging unit, generates second inspection information including the inspection region of the inspection processing based on first inspection information, and performs the inspection processing based on the second inspection information.
In this aspect, the second inspection information including the inspection region is generated from the first inspection information. In general, which region of the image is used in the inspection processing (visual inspection in a narrow sense) depends on information such as the shape of the inspection object and the work performed on the inspection object; therefore, whenever the inspection object or the work content changes, the inspection region must be reset, which places a large burden on the user. In this regard, by generating the second inspection information from the first inspection information, the inspection region and the like can be determined easily.
In another aspect, the second inspection information may include a viewpoint information group containing multiple pieces of viewpoint information, and each piece of viewpoint information in the group may include the viewpoint position and line-of-sight direction of the imaging unit in the inspection processing. Thus, a viewpoint information group can be generated as the second inspection information.
In another aspect, a priority may be set for each piece of viewpoint information in the viewpoint information group, to be used when moving the imaging unit to the viewpoint position and line-of-sight direction corresponding to that viewpoint information. Thus, a priority can be set for each piece of viewpoint information included in the viewpoint information group.
In another aspect, the imaging unit may be moved to the viewpoint position and line-of-sight direction corresponding to each piece of viewpoint information in the viewpoint information group, in a movement order set based on the priorities. Thus, the imaging unit can actually be controlled using the prioritized viewpoint information to carry out the inspection processing.
In another aspect, when it is determined from movable-range information that the imaging unit cannot be moved to the viewpoint position and line-of-sight direction corresponding to the i-th (i is a natural number) piece of viewpoint information among the multiple pieces of viewpoint information, the movement of the imaging unit based on the i-th piece of viewpoint information may be skipped, and the imaging unit may be moved according to the j-th (j is a natural number satisfying i ≠ j) piece of viewpoint information that follows the i-th piece in the movement order. This makes it possible to control the imaging unit while taking the movable range of the robot into account.
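The prioritized traversal with reachability skipping described above can be sketched as follows. The function and viewpoint names are hypothetical, and the reachability predicate stands in for the movable-range check of the robot.

```python
def plan_visits(viewpoints, reachable):
    """Order viewpoints by priority and drop any the imaging unit cannot reach.

    viewpoints : list of (priority, name) tuples; lower number = higher priority
    reachable  : predicate standing in for the movable-range check
    """
    ordered = sorted(viewpoints, key=lambda vp: vp[0])
    # Skip unreachable viewpoints and continue with the next one in order.
    return [name for _, name in ordered if reachable(name)]

viewpoints = [(2, "side"), (1, "top"), (3, "back")]
# Suppose the "back" viewpoint lies outside the movable range.
visits = plan_visits(viewpoints, reachable=lambda name: name != "back")
print(visits)  # ['top', 'side']
```

In a real system the predicate would evaluate inverse-kinematics feasibility of the viewpoint pose in the global coordinate system, as described in the subsequent aspects.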
In another aspect, the first inspection information may include an inspection processing target position relative to the inspection object; an object coordinate system corresponding to the inspection object may be set with the inspection processing target position as a reference, and the viewpoint information may be generated using the object coordinate system. Thus, the viewpoint information can be generated in the object coordinate system.
In another aspect, the first inspection information may include object position-posture information representing the position and posture of the inspection object in the global coordinate system; the viewpoint information in the global coordinate system may be obtained from the relative relationship between the global coordinate system and the object coordinate system, obtained based on the object position-posture information; and whether the imaging unit can be moved to the viewpoint position and line-of-sight direction may be determined from the movable-range information in the global coordinate system and the viewpoint information in the global coordinate system. This makes it possible to generate viewpoint information in the global coordinate system and to control the movement of the imaging unit based on that viewpoint information and the movable-range information of the robot.
In another aspect, the inspection processing may be processing performed on the result of a robot operation, and the first inspection information may be information obtained during the robot operation. Thus, the first inspection information can be obtained during the robot operation.
In another aspect, the first inspection information may include at least one of shape information of the inspection object, position-posture information of the inspection object, and an inspection processing target position relative to the inspection object. Thus, at least one of shape information, position-posture information, and the inspection processing target position can be obtained as the first inspection information.
In another aspect, the first inspection information may include three-dimensional model data of the inspection object. Thus, three-dimensional model data can be obtained as the first inspection information.
In another aspect, the inspection processing may be processing performed on the result of a robot operation, and the three-dimensional model data may include post-operation three-dimensional model data obtained by performing the robot operation, and pre-operation three-dimensional model data, that is, the three-dimensional model data of the inspection object before the robot operation. Thus, the three-dimensional model data from before and after the operation can be obtained as the first inspection information.
In another aspect, the second inspection information may include a qualified image, the qualified image being an image obtained by shooting the three-dimensional model data with a virtual camera placed at the viewpoint position and line-of-sight direction corresponding to the viewpoint information. Thus, a qualified image can be obtained as the second inspection information from the three-dimensional model data and the viewpoint information.
In another aspect, the second inspection information may include a qualified image and a pre-operation image; the qualified image is an image obtained by shooting the post-operation three-dimensional model data with a virtual camera placed at the viewpoint position and line-of-sight direction corresponding to the viewpoint information, and the pre-operation image is an image obtained by shooting the pre-operation three-dimensional model data with the virtual camera placed at the viewpoint position and line-of-sight direction corresponding to the viewpoint information; the inspection region is obtained by comparing the pre-operation image with the qualified image. This makes it possible to obtain the qualified image and the pre-operation image from the three-dimensional model data before and after the operation and the viewpoint information, and to obtain the inspection region from their comparison processing.
In another aspect, in the comparison, a difference image, that is, the difference between the pre-operation image and the qualified image, may be obtained, and the inspection region may be the region of the difference image that contains the inspection object. This makes it possible to obtain the inspection region using the difference image.
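The comparison described above, taking the difference between the pre-operation image and the qualified image and extracting the region where they disagree, can be sketched as follows. The array contents and the bounding-box rule for the region are illustrative assumptions.

```python
import numpy as np

# Two small synthetic renders standing in for the virtual-camera images.
pre_op    = np.zeros((6, 8), dtype=np.int32)  # pre-operation image
qualified = np.zeros((6, 8), dtype=np.int32)  # qualified (post-operation) image
qualified[2:4, 3:6] = 255                     # the assembled part appears here

diff = np.abs(qualified - pre_op)             # difference image
rows, cols = np.nonzero(diff)
# Inspection region as the bounding box of the differing pixels.
region = tuple(int(v) for v in (rows.min(), rows.max(), cols.min(), cols.max()))
print(region)  # (top, bottom, left, right)
```

Only the region where the operation changed the scene needs to be inspected in the captured image, which is what allows the inspection region to be set without manual intervention.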
In another aspect, the second inspection information may include a qualified image and a pre-operation image; the qualified image is an image obtained by shooting the post-operation three-dimensional model data with a virtual camera placed at the viewpoint position and line-of-sight direction corresponding to the viewpoint information, and the pre-operation image is an image obtained by shooting the pre-operation three-dimensional model data with the virtual camera placed at the viewpoint position and line-of-sight direction corresponding to the viewpoint information; a threshold used in the inspection processing based on the captured image and the qualified image is set according to the similarity between the pre-operation image and the qualified image. This makes it possible to set the threshold for the inspection processing using the similarity between the pre-operation image and the qualified image.
In another aspect, at least a first arm and a second arm may be included, and the imaging unit may be a hand-eye camera provided on at least one of the first arm and the second arm. This makes it possible to perform the inspection processing using two or more arms and a hand-eye camera provided on at least one of the arms.
Another aspect relates to a processing device that outputs information used in inspection processing to a device that performs inspection processing of an inspection object using a captured image of the inspection object taken by an imaging unit. The processing device generates, based on first inspection information, second inspection information including viewpoint information containing the viewpoint position and line-of-sight direction of the imaging unit for the inspection processing, and the inspection region of the inspection processing, and outputs the second inspection information to the device that performs the inspection processing.
In this aspect, the second inspection information including the inspection region is generated from the first inspection information. In general, which region of the image is used in the inspection processing (visual inspection in a narrow sense) depends on information such as the shape of the inspection object and the work performed on the inspection object; therefore, whenever the inspection object or the work content changes, the inspection region must be reset, which places a large burden on the user. In this regard, by generating the second inspection information from the first inspection information, the inspection region can be determined easily, allowing another device to perform the inspection processing.
Another aspect relates to an inspection method for performing inspection processing of an inspection object using a captured image of the inspection object taken by an imaging unit, the inspection method including a step of generating, based on first inspection information, second inspection information including viewpoint information containing the viewpoint position and line-of-sight direction of the imaging unit for the inspection processing, and the inspection region of the inspection processing.
In this aspect, the second inspection information including the inspection region is generated from the first inspection information. In general, which region of the image is used in the inspection processing (visual inspection in a narrow sense) depends on information such as the shape of the inspection object and the work performed on the inspection object; therefore, whenever the inspection object or the work content changes, the inspection region must be reset, which places a large burden on the user. In this regard, by generating the second inspection information from the first inspection information, the inspection region and the like can be determined easily.
Thus, according to several aspects, by generating from the first inspection information the second inspection information required for inspection, it is possible to provide a robot, a processing device, and an inspection method that reduce the burden on the user and make inspection easy to perform.
Description of the drawings
Fig. 1 is an explanatory diagram of an assembly operation performed by visual servoing.
Fig. 2A and Fig. 2B are explanatory diagrams of the positional offset of an object to be assembled.
Fig. 3 is a system configuration example of the present embodiment.
Fig. 4 is an explanatory diagram of an assembly operation performed by visual servoing based on the feature amounts of the object to be assembled.
Fig. 5 is an example of a captured image used in visual servoing based on the feature amounts of the object to be assembled.
Fig. 6 is an explanatory diagram of an assembled state.
Fig. 7 is a flowchart of visual servoing based on the feature amounts of the object to be assembled.
Fig. 8 is another flowchart of visual servoing based on the feature amounts of the object to be assembled.
Fig. 9 is an explanatory diagram of processing for moving the assembling object to the surface of the object to be assembled.
Fig. 10 is an explanatory diagram of an assembly operation performed by two kinds of visual servoing.
Fig. 11 is a flowchart of processing when the two kinds of visual servoing are performed in succession.
Figs. 12(A) to 12(D) are explanatory diagrams of reference images and captured images.
Figs. 13A and 13B are explanatory diagrams of an assembly operation of three workpieces.
Figs. 14(A) to 14(C) are explanatory diagrams of captured images used when performing the assembly operation of three workpieces.
Fig. 15 is a flowchart of processing when performing the assembly operation of three workpieces.
Figs. 16(A) to 16(C) are explanatory diagrams of captured images used when assembling three workpieces simultaneously.
Fig. 17 is a flowchart of processing when assembling three workpieces simultaneously.
Figs. 18(A) to 18(C) are explanatory diagrams of captured images used when assembling three workpieces in a different order.
Figs. 19A and 19B are configuration examples of robots.
Fig. 20 is a configuration example of a robot control system that controls a robot via a network.
Fig. 21 is a diagram showing an example of the configuration of the robot system 1 of the second embodiment.
Fig. 22 is a block diagram showing an example of the functional configuration of the robot system 1.
Fig. 23 is a data flow diagram of the robot system 1.
Fig. 24 is a diagram showing the hardware configuration of the control unit 20.
Fig. 25A is a diagram explaining the trajectory of the endpoint when the arm 11 is controlled by position control and visual servoing, and Fig. 25B is an example of a target image.
Fig. 26 is a diagram explaining the coefficient α.
Fig. 27 is a flowchart showing the processing flow of the robot system 2 of the third embodiment of the invention.
Fig. 28 is a diagram explaining the position of the object, the position of the switching point, and the trajectory of the endpoint.
Fig. 29 is a diagram showing an example of the configuration of the robot system 3 of the fourth embodiment of the invention.
Fig. 30 is a block diagram showing an example of the functional configuration of the robot system 3.
Fig. 31 is a flowchart showing the processing flow of the robot system 3.
Fig. 32 is a diagram showing an assembly operation in which the robot system 3 inserts a workpiece into a hole H.
Fig. 33 is a flowchart showing the processing flow of the robot system 4 of the fifth embodiment of the invention.
Fig. 34 is a diagram showing an assembly operation in which the robot system 4 inserts a workpiece into the hole H.
Fig. 35 is a configuration example of the robot control device of the present embodiment.
Fig. 36 is a detailed configuration example of the robot control device of the present embodiment.
Fig. 37 is a configuration example of an imaging unit that obtains image information.
Fig. 38 is a configuration example of the robot of the present embodiment.
Fig. 39 is another example of the structure of the robot of the present embodiment.
Fig. 40 is a configuration example of a general visual servoing control system.
Fig. 41 is a diagram explaining the relationship among the image feature variation, the variation of the position and posture information, the variation of the joint angle information, and the Jacobian matrices.
Fig. 42 is a diagram explaining visual servoing control.
Figs. 43A and 43B are explanatory diagrams of the abnormality detection method of the present embodiment.
Fig. 44 is an explanatory diagram of setting the threshold according to the difference between image acquisition times.
Fig. 45 is a diagram showing the relationship among the image acquisition time, the acquisition time of the joint angle information, and the image feature acquisition time.
Fig. 46 is another diagram showing the relationship among the image acquisition time, the acquisition time of the joint angle information, and the image feature acquisition time.
Fig. 47 is a diagram explaining, together with mathematical formulas, the correlation among the image feature variation, the variation of the position and posture information, and the variation of the joint angle information.
Figure 48 is the flow chart that the processing to present embodiment illustrates.
Figure 49 is another detailed configuration example of the robot controller of present embodiment.
Figure 50 is the configuration example of the robot of present embodiment.
Figure 51 A, Figure 51 B are the configuration examples of the processing unit of present embodiment.
Figure 52 is the configuration example of the robot of present embodiment.
Figure 53 is other configuration examples of the robot of present embodiment.
Figure 54 is the configuration example for the check device for checking information using second.
Figure 55 is the example that the first inspection information checks information with second.
Figure 56 is the flow chart illustrated to the flow of processed offline.
Figure 57 A, Figure 57 B are the examples of shape information (three-dimensional modeling data).
Figure 58 is the example of viewpoint candidate information used in the generation of view information.
Figure 59 is the example of the coordinate value in the object coordinate system of viewpoint candidate information.
Figure 60 is the setting example based on the object coordinate system for checking processing object's position.
Figure 61 A~Figure 61 G are the examples of image and qualified images before operation corresponding with each view information.
Figure 62 A~Figure 62 D are the definition graphs of the setting means of inspection area.
Figure 63 A~Figure 63 D are the definition graphs of the setting means of inspection area.
Figure 64 A~Figure 64 D are the definition graphs of the setting means of inspection area.
Figure 65 A~Figure 65 D are the definition graphs of the similarity calculation processing before and after operation.
Figure 66 A~Figure 66 D are the definition graphs of the similarity calculation processing before and after operation.
Figure 67 A~Figure 67 E are the definition graphs of the relative importance value of view information.
Figure 68 is the flow chart illustrated to the flow handled online.
Figure 69 A, Figure 69 B are the comparisons of the view information in view information and robot coordinate system in object coordinate system
Example.
Figure 70 A, Figure 70 B are the definition graphs of image rotation angle.
Specific Embodiments
Hereinafter, the present embodiment will be described. Note that the present embodiment described below does not unduly limit the content of the present invention recited in the claims. Further, not all of the configurations described in the present embodiment are necessarily essential constituent elements of the invention.
1. Method of the Present Embodiment
First Embodiment
As shown in Figure 1, a case will be described here in which an assembling object WK1 held by a hand HD of a robot is assembled to an assembled object WK2 in an assembling operation. The hand HD of the robot is provided at the distal end of an arm AM of the robot.
First, as a comparative example for the present embodiment, consider performing the assembling operation shown in Figure 1 by visual servoing that uses the reference image described above. In this case, the robot is controlled on the basis of a captured image shot by a camera (imaging unit) CM and a reference image prepared in advance. Specifically, as indicated by an arrow YJ, the assembling object WK1 is moved to the position of the assembling object WK1R appearing in the reference image, and is assembled to the assembled object WK2.
The reference image RIM used at this time is shown in Figure 2A, and Figure 2B shows the position in real space (three-dimensional space) of the assembled object WK2 appearing in the reference image RIM. In the reference image RIM of Figure 2A, the assembling object WK1R (corresponding to WK1R in Figure 1) appears in the assembled state with the assembled object WK2 (or in the state immediately before assembly). In visual servoing using this reference image RIM, the assembling object WK1 is moved so that the position and posture of the assembling object WK1 appearing in the captured image coincide with the position and posture of the assembling object WK1R appearing in the assembled state in the reference image RIM.
However, as described above, when the assembling operation is actually performed, the position and posture of the assembled object WK2 may vary. For example, as shown in Figure 2B, the center-of-gravity position in real space of the assembled object WK2 appearing in the reference image RIM of Figure 2A is GC1. In contrast, the actual assembled object WK2 may be displaced, so that its center-of-gravity position is GC2. In this case, even if the actual assembling object WK1 is moved so that its position and posture coincide with those of the assembling object WK1R appearing in the reference image RIM, the assembled state with the actual assembled object WK2 is not achieved, and the assembling operation cannot be performed correctly. This is because, when the position and posture of the assembled object WK2 change, the position and posture of the assembling object WK1 that achieve the assembled state with the assembled object WK2 also change.
Therefore, the robot control system 100 and the like of the present embodiment are configured so that the assembling operation can be performed correctly even when the position and posture of the assembled object change.
Specifically, Figure 3 shows a configuration example of the robot control system 100 of the present embodiment. The robot control system 100 of the present embodiment includes a captured image acquisition unit 110 that acquires a captured image from an imaging unit 200, and a control unit 120 that controls a robot 300 on the basis of the captured image. The robot 300 has an end effector (hand) 310 and an arm 320. The configurations of the imaging unit 200 and the robot 300 will be described in detail later.
First, the captured image acquisition unit 110 acquires a captured image in which, of the assembling object and the assembled object of the assembling operation, at least the assembled object appears.
Then, the control unit 120 performs characteristic quantity detection processing on the assembled object on the basis of the captured image, and moves the assembling object on the basis of the characteristic quantity of the assembled object. The processing of moving the assembling object also includes processing such as outputting control information (control signals) for the robot 300. The functions of the control unit 120 can be realized by hardware such as various processors (a CPU or the like) and an ASIC (a gate array or the like), or by a program or the like.
In visual servoing using the reference image described above (the comparative example), the assembling object is moved on the basis of the characteristic quantity of the assembling object in the reference image. In contrast, in the present embodiment, the assembling object is moved on the basis of the characteristic quantity of the assembled object appearing in the captured image. For example, as shown in Figure 4, the characteristic quantity of the workpiece WK2, which is the assembled object, is detected in the captured image shot by the camera CM, and the workpiece WK1, which is the assembling object, is moved as indicated by an arrow YJ on the basis of the detected characteristic quantity of the workpiece WK2.
Here, the assembled object WK2 at the current time (at the time of shooting) appears in the captured image shot by the camera CM. Therefore, the assembling object WK1 can be moved to the position of the assembled object WK2 at the current time. This prevents the failure of the visual servoing using the reference image described above (the problem of the comparative example described with reference to Figure 1), in which the assembling object WK1 fails to move to the position that achieves the assembled state at the current time. Furthermore, since a new target position for the visual servoing is set on the basis of a captured image each time the assembling operation is performed, a correct target position can be set even when the position and posture of the assembled object WK2 change.
As described above, the assembling operation can be performed correctly even when the position and posture of the assembled object change. Moreover, in the present embodiment, there is no need to prepare a reference image in advance, so the preparation cost of visual servoing can be reduced.
In this way, the control unit 120 controls the robot by performing visual servoing on the basis of the captured image. This makes it possible to perform feedback control or the like on the robot in accordance with the current work status.
The robot control system 100 is not limited to the configuration of Figure 3, and various modifications are possible, such as omitting some of the constituent elements described above or adding other constituent elements. Further, as shown in Figure 19B described later, the robot control system 100 of the present embodiment may be included in the robot 300 and configured integrally with the robot 300. Furthermore, as shown in Figure 20 described later, the functions of the robot control system 100 may be realized by a server 500 and a terminal device 330 provided in each robot 300.
In addition, for example, in a case where the robot control system 100 and the imaging unit 200 are connected by a network including at least one of a wired and a wireless connection, the captured image acquisition unit 110 may be a communication unit (interface unit) that communicates with the imaging unit 200. Further, in a case where the robot control system 100 includes the imaging unit 200, the captured image acquisition unit 110 may be the imaging unit 200 itself.
Here, the captured image refers to an image obtained by shooting with the imaging unit 200. The captured image may also be an image stored in an external storage unit or an image acquired via a network. The captured image is, for example, the image PIM11 shown in Figure 5 described later.
The assembling operation refers to an operation of assembling a plurality of work objects, and specifically refers to an operation of assembling an assembling object to an assembled object. The assembling operation is, for example, an operation of placing the workpiece WK1 on (or beside) the workpiece WK2, an operation of inserting (fitting) the workpiece WK1 into the workpiece WK2 (an insertion operation or a fitting operation), or an operation of bonding, connecting, assembling, or welding the workpiece WK1 and the workpiece WK2 (a bonding operation, a connecting operation, an assembling operation, or a welding operation).
The assembling object refers to the object that is assembled to the assembled object in the assembling operation. For example, in the example of Figure 4, it is the workpiece WK1.
On the other hand, the assembled object refers to the object to which the assembling object is assembled in the assembling operation. For example, in the example of Figure 4, it is the workpiece WK2.
2. Details of Processing
Next, the processing of the present embodiment will be described in detail.
2.1. Visual Servoing Based on the Characteristic Quantity of the Assembled Object
The captured image acquisition unit 110 of the present embodiment acquires one or more captured images in which the assembling object and the assembled object appear. Then, the control unit 120 performs characteristic quantity detection processing on the assembling object and the assembled object on the basis of the one or more acquired captured images. Further, the control unit 120 moves the assembling object on the basis of the characteristic quantity of the assembling object and the characteristic quantity of the assembled object, so that the relative position and posture relation between the assembling object and the assembled object becomes a target relative position and posture relation.
Here, the target relative position and posture relation refers to the relative position and posture relation between the assembling object and the assembled object that serves as the target when the assembling operation is performed by visual servoing. For example, in the example of Figure 4, the relative position and posture relation at the time when the workpiece WK1 contacts (adjoins) the triangular hole HL of the workpiece WK2 is the target relative position and posture relation.
This makes it possible to perform the assembling operation and the like on the basis of the characteristic quantity of the assembling object and the characteristic quantity of the assembled object detected from the captured images. The one or more captured images acquired by the captured image acquisition unit 110 will be described in detail later.
In most assembling operations, the part of the assembling object that is to be assembled to the assembled object (the assembling part) and the part of the assembled object to which the assembling object is to be assembled (the assembled part) are often determined in advance. For example, in the example of Figure 4, the assembling part of the assembling object is the bottom surface BA of the workpiece WK1, and the assembled part of the assembled object is the triangular hole HL of the workpiece WK2. In the example of the assembling operation of Figure 4, the assembling part BA is to be fitted into the assembled part, namely the hole HL; assembling, for example, the side surface SA of the workpiece WK1 to the hole HL would be a failure. It is therefore preferable to set the assembling part of the assembling object and the assembled part of the assembled object in advance.
Therefore, the control unit 120 moves the assembling object on the basis of the characteristic quantity set as the target characteristic quantity among the characteristic quantities of the assembled object and the characteristic quantity set as the concern characteristic quantity among the characteristic quantities of the assembling object, so that the relative position and posture relation between the assembling object and the assembled object becomes the target relative position and posture relation.
Here, the characteristic quantity refers to, for example, feature points of an image, or contour lines of a detection target (the assembling object, the assembled object, or the like) appearing in the image. The characteristic quantity detection processing refers to processing for detecting characteristic quantities in an image, for example feature point detection processing or contour line detection processing.
Hereinafter, a case where feature points are detected as the characteristic quantities will be described. A feature point refers to a point that can be observed prominently in an image. For example, in the captured image PIM11 shown in Figure 5, feature points P1 to P10 are detected as the feature points of the workpiece WK2, which is the assembled object, and feature points Q1 to Q5 are detected as the feature points of the workpiece WK1, which is the assembling object. In the example of Figure 5, for ease of illustration and description, only the feature points P1 to P10 and Q1 to Q5 are shown as detected, but in an actual captured image, more feature points than these are detected. Even in a case where more feature points than these are detected, however, the content of the processing described below is unchanged.
In the present embodiment, a corner detection method or the like is used as the feature point detection method (feature point detection processing), but other general corner detection methods (eigenvalue-based methods, FAST feature detection) may also be used, and local feature descriptors typified by SIFT (Scale Invariant Feature Transform), SURF (Speeded Up Robust Features), or the like may also be used.
In the present embodiment, visual servoing is performed using the characteristic quantity set as the target characteristic quantity among the characteristic quantities of the assembled object and the characteristic quantity set as the concern characteristic quantity among the characteristic quantities of the assembling object. Specifically, in the example of Figure 5, among the feature points P1 to P10 of the workpiece WK2, the target feature points P9 and P10 are set as the target characteristic quantity. On the other hand, among the feature points Q1 to Q5 of the workpiece WK1, the concern feature points Q4 and Q5 are set as the concern characteristic quantity.
Then, the control unit 120 moves the assembling object so that the concern feature points of the assembling object coincide with or approach the target feature points of the assembled object. That is, the assembling object WK1 is moved as indicated by the arrow YJ so that the concern feature point Q4 approaches the target feature point P9 and the concern feature point Q5 approaches the target feature point P10.
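As a minimal sketch of this step (with hypothetical pixel coordinates for Q4, Q5, P9, and P10, not values taken from the figures), the translation that brings the concern feature points onto the target feature points can be computed as the mean offset between the matched point pairs:

```python
# Least-squares translation that brings the concern feature points of the
# assembling object onto the corresponding target feature points of the
# assembled object. The point coordinates below are hypothetical.
def mean_offset(concern_pts, target_pts):
    n = len(concern_pts)
    dx = sum(t[0] - c[0] for c, t in zip(concern_pts, target_pts)) / n
    dy = sum(t[1] - c[1] for c, t in zip(concern_pts, target_pts)) / n
    return dx, dy

concern = [(40.0, 30.0), (60.0, 30.0)]   # Q4, Q5 (hypothetical)
target = [(42.0, 90.0), (62.0, 90.0)]    # P9, P10 (hypothetical)
dx, dy = mean_offset(concern, target)
print(dx, dy)  # → 2.0 60.0
```

A full visual servoing controller would translate this image-space offset into a robot motion command and repeat the measurement every control cycle.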
Here, the target characteristic quantity refers to the characteristic quantity of the assembled object that serves as the target when the assembling object is moved by visual servoing. In other words, the target characteristic quantity is the characteristic quantity of the assembled part of the assembled object. A target feature point refers to a feature point set as the target characteristic quantity in a case where feature point detection processing is performed. As described above, in the example of Figure 5, the feature points P9 and P10 corresponding to the triangular hole HL of the workpiece WK2 are set as the target feature points.
On the other hand, the concern characteristic quantity refers to the characteristic quantity, among the characteristic quantities of the assembling object, that represents the point in real space (in the example of Figure 5, the bottom surface of the workpiece WK1) that is to be moved toward the point in real space corresponding to the target characteristic quantity (in the example of Figure 5, the triangular hole HL of the workpiece WK2). In other words, the concern characteristic quantity is the characteristic quantity of the assembling part of the assembling object. A concern feature point refers to a feature point set as the concern characteristic quantity in a case where feature point detection processing is performed. As described above, in the example of Figure 5, the feature points Q4 and Q5 corresponding to the bottom surface of the workpiece WK1 are set as the concern feature points.
The target characteristic quantity (target feature points) and the concern characteristic quantity (concern feature points) may be set in advance by the instructor (user), or may be characteristic quantities (feature points) set in accordance with a given algorithm. For example, the target feature points may be set on the basis of the distribution of the detected feature points and the relative positional relation to the target feature points. Specifically, in the example of Figure 5, among the feature points P1 to P10 representing the workpiece WK2 in the captured image PIM11, the feature points P9 and P10 near the center of the distribution may be set as the target feature points. Alternatively, points corresponding to the target feature points may be set in advance in CAD (Computer Aided Design) data representing the assembled object, and CAD matching between the captured image and the CAD data may be performed, so that the feature points set as the target feature points are determined (detected) from among the feature points of the assembled object on the basis of the CAD matching result. The same applies to the concern characteristic quantity (concern feature points).
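One illustrative way to realize such a "near the center of the distribution" rule is sketched below; the selection rule and the coordinates standing in for P1 to P10 are assumptions for illustration, not the algorithm prescribed by the embodiment:

```python
# Illustrative rule: among the detected feature points of the assembled
# object, choose as target feature points the k points closest to the
# centroid of all detected points. Coordinates are hypothetical stand-ins.
def pick_target_points(points, k=2):
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    return sorted(points, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)[:k]

pts = {"P1": (10, 120), "P2": (90, 120), "P3": (10, 60), "P4": (90, 60),
       "P5": (10, 100), "P6": (90, 100), "P7": (30, 110), "P8": (50, 130),
       "P9": (42, 90), "P10": (62, 90)}
targets = pick_target_points(list(pts.values()))
print(targets)  # → [(42, 90), (62, 90)]
```

A practical system would combine such a geometric heuristic with the CAD-matching approach, since the centroid rule alone cannot guarantee that the selected points correspond to the assembled part.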
In the present embodiment, the control unit 120 moves the assembling object so that the concern feature points of the assembling object coincide with or approach the target feature points of the assembled object; however, since the assembled object and the assembling object are physical objects, a target feature point and a concern feature point cannot actually be detected at the same point. That is, moving the assembling object so that the points "coincide" ultimately means moving the point at which a concern feature point is detected to the point at which the corresponding target feature point is detected.
This makes it possible to move the assembling object so that the relative position and posture relation between the set assembling part of the assembling object and the set assembled part of the assembled object becomes the target relative position and posture relation. The assembling part of the assembling object can then be assembled to the assembled part of the assembled object. For example, as shown in Figure 6, the bottom surface BA, which is the assembling part of the workpiece WK1, can be inserted into the hole HL, which is the assembled part of the workpiece WK2.
Next, the processing flow of the present embodiment will be described with reference to the flowchart of Figure 7.
First, the captured image acquisition unit 110 acquires, for example, the captured image PIM11 shown in Figure 5 (S101). Both the assembling object WK1 and the assembled object WK2 appear in the captured image PIM11.
Next, the control unit 120 performs characteristic quantity detection processing on the basis of the acquired captured image PIM11, thereby detecting the target characteristic quantity FB of the assembled object WK2 and the concern characteristic quantity FA of the assembling object WK1 (S102, S103).
Then, as described above, the control unit 120 moves the assembling object WK1 on the basis of the detected concern characteristic quantity FA and target characteristic quantity FB (S104), and determines whether the relative position and posture relation between the assembling object WK1 and the assembled object WK2 has become the target relative position and posture relation (S105).
Finally, in a case where it is determined that the relative position and posture relation between the assembling object WK1 and the assembled object WK2 has become the target relative position and posture relation as shown in Figure 6, the processing ends. In a case where it is determined that the relative position and posture relation has not become the target relative position and posture relation, the processing returns to step S101 and is repeated. The above is the processing flow of the present embodiment.
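The S101 to S105 loop can be sketched as a simulated feedback loop in Python; image acquisition and characteristic quantity detection are stubbed out, the characteristic quantities are reduced to 2D points, and the gain and tolerance are assumptions rather than values from the embodiment:

```python
# Simulated visual-servoing loop following the S101-S105 flow: acquire a
# captured image, detect the target characteristic quantity FB of WK2 and
# the concern characteristic quantity FA of WK1, move WK1, and repeat until
# the target relative position/posture relation is reached. Each "move"
# closes a fixed fraction of the remaining image-space error, as a
# proportional servo gain would.
GAIN = 0.5   # fraction of the remaining error corrected per cycle (assumed)
TOL = 0.1    # convergence tolerance in pixels (assumed)

def detect_features(state):
    # Stub for S102/S103: in reality both come from the captured image.
    return state["target_FB"], state["concern_FA"]

def servo_loop(state, max_iters=50):
    for it in range(max_iters):
        fb, fa = detect_features(state)               # S101-S103
        err = (fb[0] - fa[0], fb[1] - fa[1])
        if (err[0] ** 2 + err[1] ** 2) ** 0.5 < TOL:  # S105: target reached
            return it
        # S104: command a move that closes part of the error
        state["concern_FA"] = (fa[0] + GAIN * err[0], fa[1] + GAIN * err[1])
    return max_iters

state = {"target_FB": (42.0, 90.0), "concern_FA": (40.0, 30.0)}
iters = servo_loop(state)
print(iters)  # → 10
```

The key property of the flow, reflected here, is that the target is re-detected from a fresh image every cycle, so a displaced assembled object simply changes `target_FB` without invalidating the loop.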
The captured image acquisition unit 110 may also acquire a plurality of captured images. In this case, the captured image acquisition unit 110 may acquire a plurality of captured images in each of which both the assembling object and the assembled object appear, or may acquire a captured image in which only the assembling object appears and a captured image in which only the assembled object appears.
Here, the flowchart of Figure 8 shows the processing flow for the latter case, in which captured images in which the assembling object and the assembled object appear are acquired separately.
First, the captured image acquisition unit 110 acquires a captured image PIM11 in which at least the assembled object WK2 appears (S201). The assembling object WK1 may also appear in the captured image PIM11. Then, the control unit 120 detects the target characteristic quantity FB of the assembled object WK2 from the captured image PIM11 (S202).
Next, the captured image acquisition unit 110 acquires a captured image PIM12 in which at least the assembling object WK1 appears (S203). As in step S201, the assembled object WK2 may also appear in the captured image PIM12. Then, the control unit 120 detects the concern characteristic quantity FA of the assembling object WK1 from the captured image PIM12 (S204).
Thereafter, in the same manner as the processing flow described with reference to Figure 7, the control unit 120 moves the assembling object WK1 on the basis of the detected concern characteristic quantity FA and target characteristic quantity FB (S205), and determines whether the relative position and posture relation between the assembling object WK1 and the assembled object WK2 has become the target relative position and posture relation (S206).
Finally, in a case where it is determined that the relative position and posture relation between the assembling object WK1 and the assembled object WK2 has become the target relative position and posture relation as shown in Figure 6, the processing ends. In a case where it is determined that it has not become the target relative position and posture relation, the processing returns to step S203 and is repeated. The above is the processing flow for the case of acquiring separate captured images in which the assembling object and the assembled object respectively appear.
In the above example, the assembling object is actually assembled to the assembled object by visual servoing, but the present invention is not limited to this; visual servoing may instead be used to bring about the state immediately before the assembling object is assembled to the assembled object.
That is, the control unit 120 may determine, on the basis of the characteristic quantity (feature points) of the assembled object, image regions that are in a given positional relation with the assembled object, and move the assembling object so that the concern feature points of the assembling object coincide with or approach the determined image regions. In other words, the control unit 120 may determine, on the basis of the characteristic quantity of the assembled object, points in real space that are in a given positional relation with the assembled object, and move the assembling object toward the determined points.
For example, in the captured image PIM shown in Figure 9, the feature points P8 to P10 are detected as the feature points representing the assembled part of the assembled object WK2, namely the triangular hole HL. In this case, the image regions R1 to R3 that are in a given positional relation with the feature points P8 to P10 are determined in the captured image PIM. Then, the assembling object WK1 is moved so that the concern feature point Q4 of the assembling object coincides with (approaches) the image region R2 and the concern feature point Q5 coincides with (approaches) the image region R3.
This makes it possible to bring about, for example, the state immediately before the assembling operation.
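The region test can be sketched as follows; as an assumption for illustration (not taken from the embodiment), each region is a fixed-size square at a fixed offset from a target feature point, and all coordinates are hypothetical:

```python
# Determine image regions in a given positional relation to detected
# feature points, then check whether the concern feature points of the
# assembling object have entered them. Offsets, region size, and all
# coordinates are hypothetical.
def region_for(point, offset=(0.0, -20.0), half=5.0):
    # Square region centered at the feature point shifted by `offset`
    # (e.g. just above the hole HL), with half-width `half`.
    cx, cy = point[0] + offset[0], point[1] + offset[1]
    return (cx - half, cy - half, cx + half, cy + half)

def inside(pt, region):
    x0, y0, x1, y1 = region
    return x0 <= pt[0] <= x1 and y0 <= pt[1] <= y1

p9, p10 = (42.0, 90.0), (62.0, 90.0)      # target feature points (hypothetical)
r2, r3 = region_for(p9), region_for(p10)  # regions like R2, R3
q4, q5 = (43.0, 71.0), (60.0, 69.0)       # concern feature points (hypothetical)
ready = inside(q4, r2) and inside(q5, r3)
print(ready)  # → True
```

When `ready` becomes true, the assembling object is in the pre-assembly state, and a separate (for example force-controlled) insertion step could take over.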
It is not always necessary to perform processing for detecting the characteristic quantity of the assembling object as in the examples described above. For example, the characteristic quantity of the assembled object may be detected, and the position and posture of the assembled object relative to the robot may be inferred on the basis of the detected characteristic quantity, so that the robot is controlled in such a manner that the hand holding the assembling object approaches the inferred position of the assembled object.
2.2. Assembling Operation Performed by Two Types of Visual Servoing
Next, a description will be given of processing in which two types of visual servoing are performed in succession: visual servoing using a reference image (first visual servoing) and visual servoing that moves the assembling object on the basis of the characteristic quantity of the assembled object (second visual servoing).
For example, in Figure 10, the movement of the assembling object WK1 from a position GC1 to a position GC2 (the movement indicated by an arrow YJ1) is performed by the first visual servoing using the reference image, and the movement of the assembling object WK1 from the position GC2 to a position GC3 (the movement indicated by an arrow YJ2) is performed by the second visual servoing using the characteristic quantity of the assembled object WK2. The positions GC1 to GC3 are center-of-gravity positions of the assembling object WK1.
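Under stated assumptions, the two-stage movement of Figure 10 can be simulated as below; the positions, gain, and convergence tolerance are hypothetical, and each stage stands in for one of the two visual servoing loops rather than a real robot command interface:

```python
# Two-stage servoing as in Figure 10: a first stage drives the assembling
# object toward GC2, the pose taken from the reference image, and a second
# stage drives it toward GC3, the pose derived from the assembled object's
# detected characteristic quantity. Positions, gain, and tolerance are
# hypothetical values for illustration.
GAIN, TOL = 0.5, 0.1

def drive_to(pos, goal):
    # Proportional convergence toward the goal until the L1 error is small.
    while abs(goal[0] - pos[0]) + abs(goal[1] - pos[1]) >= TOL:
        pos = (pos[0] + GAIN * (goal[0] - pos[0]),
               pos[1] + GAIN * (goal[1] - pos[1]))
    return pos

gc1 = (0.0, 0.0)          # initial center of gravity of WK1 (hypothetical)
gc2 = (50.0, 20.0)        # target of the first visual servoing (from RIM)
gc3 = (50.0, 40.0)        # target of the second visual servoing (from WK2)

pos = drive_to(gc1, gc2)  # first visual servoing (arrow YJ1)
pos = drive_to(pos, gc3)  # second visual servoing (arrow YJ2)
done = abs(pos[0] - gc3[0]) + abs(pos[1] - gc3[1]) < TOL
print(done)  # → True
```

The division of labor mirrors the text: the first stage handles the coarse approach using a prepared image, and the second stage absorbs any displacement of the assembled object by re-measuring it.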
When such processing is performed, the robot control system 100 of the present embodiment further includes a reference image storage unit 130, as shown in Figure 3. The reference image storage unit 130 stores a reference image showing the assembling object in the target position and posture. The reference image is, for example, an image RIM shown in Figure 12(A) described later. The functions of the reference image storage unit 130 can be realized by a memory such as a RAM (Random Access Memory), an HDD (Hard Disk Drive), or the like.
Then, as the first visual servoing indicated by the arrow YJ1 in Figure 10 described above, the control unit 120 moves the assembling object to the target position and posture on the basis of a first captured image in which at least the assembling object appears and the reference image.
Further, after the above first visual servoing, the control unit 120 performs the second visual servoing indicated by the arrow YJ2 in Figure 10. That is, after moving the assembling object, the control unit 120 performs characteristic quantity detection processing on the assembled object on the basis of a second captured image in which at least the assembled object appears, and moves the assembling object on the basis of the characteristic quantity of the assembled object.
Here, a more specific processing flow will be described with reference to the flowchart of Figure 11 and Figures 12(A) to 12(D).
First, as preparation for the first visual servoing, the hand HD of the robot is made to hold the assembling object WK1, the assembling object WK1 is moved to the target position and posture (S301), and the assembling object WK1 in the target position and posture is shot by the imaging unit 200 (the camera CM in Figure 10), thereby acquiring the reference image (target image) RIM shown in Figure 12(A) (S302). Then, the characteristic quantity F0 of the assembling object WK1 is detected from the acquired reference image RIM (S303).
Here, the target position and posture refers to the position and posture of the assembling object WK1 that serves as the target in the first visual servoing. For example, in Figure 10, the position GC2 is the target position and posture, and in the reference image RIM of Figure 12(A), the assembling object WK1 appears located at the target position and posture GC2. The target position and posture is set by the instructor (user) when the reference image is generated.
The reference image, like the reference image RIM of Figure 12(A), refers to an image in which the moving object of the first visual servoing, namely the assembling object WK1, appears in the target position and posture described above. In the reference image RIM of Figure 12(A), the assembled object WK2 also appears, but the assembled object WK2 does not have to appear in the reference image. The reference image may also be an image stored in an external storage unit, an image acquired via a network, an image generated from CAD model data, or the like.
Next, the first visual servo is performed. First, the captured image acquisition unit 110 acquires a first captured image PIM101 as shown in Figure 12 (B) (S304).
Here, the first captured image, like the captured image PIM101 of Figure 12 (B) in this example, refers to a captured image in which, of the assembling object WK1 and the assembled object WK2 in the assembling operation, at least the assembling object WK1 appears.
Then, the control unit 120 detects the characteristic quantity F1 of the assembling object WK1 from the first captured image PIM101 (S305), and moves the assembling object WK1 as shown by the arrow YJ1 of Figure 10 according to the above characteristic quantity F0 and the characteristic quantity F1 (S306).
Then, the control unit 120 judges whether the assembling object WK1 is at the target position and posture GC2 (S307). When it is judged that the assembling object WK1 is at the target position and posture GC2, the process proceeds to the second visual servo. On the other hand, when it is judged that the assembling object WK1 is not at the target position and posture GC2, the process returns to step S304, and the first visual servo is repeated.
In this way, in the first visual servo, the robot is controlled while the characteristic quantities of the assembling object WK1 in the reference image RIM and in the first captured image PIM101 are compared with each other.
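The acquire-detect-compare-move loop described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the helper names capture_image, detect_features and command_motion, and the proportional gain, are assumed for the sketch.

```python
import numpy as np

def first_visual_servo(f0, capture_image, detect_features, command_motion,
                       tol=2.0, gain=0.5, max_iters=100):
    """Servo the assembling object toward the reference characteristic quantity f0.

    f0              -- characteristic quantity detected from the reference image
    capture_image   -- returns the current first captured image
    detect_features -- image -> characteristic quantity F1 of the assembling object
    command_motion  -- applies an image-space correction to the assembling object
    """
    for _ in range(max_iters):
        image = capture_image()             # acquire a first captured image
        f1 = detect_features(image)         # detect characteristic quantity F1
        error = f0 - f1                     # compare with reference quantity F0
        if np.linalg.norm(error) < tol:     # target position and posture reached?
            return True                     # proceed to the second visual servo
        command_motion(gain * error)        # move the assembling object
    return False
```

The loop terminates either when the feature error falls below a tolerance (the judgment step) or after a bounded number of iterations, mirroring the repeat-until-converged structure of the flowchart.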
Next, the second visual servo is performed. In the second visual servo, first, the captured image acquisition unit 110 acquires a second captured image PIM21 as shown in Figure 12 (C) (S308). Here, the second captured image refers to a captured image used for the second visual servo. In the second captured image PIM21 of this example, both the assembling object WK1 and the assembled object WK2 appear.
Then, the control unit 120 detects the target characteristic quantity FB of the assembled object WK2 from the second captured image PIM21 (S309). For example, in this example, as shown in Figure 12 (C), the target feature point GP1 and the target feature point GP2 are detected as the target characteristic quantity FB.
Similarly, the control unit 120 detects the concern characteristic quantity FA of the assembling object WK1 from the second captured image PIM21 (S310). For example, in this example, as shown in Figure 12 (C), the concern feature point IP1 and the concern feature point IP2 are detected as the concern characteristic quantity FA.
Next, the control unit 120 moves the assembling object WK1 according to the concern characteristic quantity FA and the target characteristic quantity FB (S311). That is, in the same manner as the example described above with Figure 5, the assembling object WK1 is moved so that the concern feature point IP1 approaches the target feature point GP1 and the concern feature point IP2 approaches the target feature point GP2.
Then, the control unit 120 judges whether the assembling object WK1 and the assembled object WK2 are in the target relative position and posture relation (S312). For example, in the captured image PIME shown in Figure 12 (D), since the concern feature point IP1 adjoins the target feature point GP1 and the concern feature point IP2 adjoins the target feature point GP2, it is judged that the assembling object WK1 and the assembled object WK2 are in the target relative position and posture relation, and the processing ends.
On the other hand, when it is judged that the assembling object WK1 and the assembled object WK2 are not in the target relative position and posture relation, the process returns to step S308, and the second visual servo is repeated.
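The second visual servo pairs each concern feature point with its target feature point and drives the residuals toward zero until the points adjoin. A minimal sketch under assumed helper names (the two detectors and move_object are hypothetical):

```python
import numpy as np

def second_visual_servo(capture, detect_target_pts, detect_concern_pts,
                        move_object, adjacency_tol=1.5, gain=0.5, max_iters=200):
    """Bring each concern feature point of WK1 next to its paired target
    feature point on WK2.

    detect_target_pts  -- image -> (N, 2) array of target points (GP1, GP2)
    detect_concern_pts -- image -> (N, 2) array of concern points (IP1, IP2)
    move_object        -- applies an image-space correction to WK1
    """
    for _ in range(max_iters):
        img = capture()                          # second captured image
        gp = detect_target_pts(img)              # GP1, GP2 of the assembled object
        ip = detect_concern_pts(img)             # IP1, IP2 of the assembling object
        errors = gp - ip                         # IP1->GP1 and IP2->GP2 offsets
        if np.all(np.linalg.norm(errors, axis=1) < adjacency_tol):
            return True                          # target relative posture reached
        move_object(gain * errors.mean(axis=0))  # move WK1 toward the targets
    return False
```

Because the target points are re-detected from each new captured image, any offset of the actual assembled object is absorbed each cycle, which is the property the text relies on when reusing one reference image.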
As a result, each time the same assembling operation is repeated, the same reference image can be used: the assembling object is first moved to the vicinity of the assembled object, and the assembling operation can then be performed in accordance with the detailed position and posture of the actual assembled object. That is, even when the position and posture of the assembled object at the time the reference image was generated deviates (differs) from the position and posture of the assembled object in the actual assembling operation, the second visual servo compensates for the position offset of the assembled object, so the same reference image can be used every time in the first visual servo without preparing different reference images. Consequently, the setting cost of reference images can be suppressed.
In the above step S310, the concern characteristic quantity FA of the assembling object WK1 is detected from the second captured image PIM21, but it does not necessarily have to be detected from the second captured image PIM21. For example, when the assembling object WK1 does not appear in the second captured image PIM21, the characteristic quantity of the assembling object WK1 may be detected from another second captured image PIM22 in which the assembling object appears.
2.3. Assembling operation of three workpieces
Next, as shown in Figure 13 A and Figure 13 B, the processing in the case of performing an assembling operation of three workpieces WK1 to WK3 is described.
In this assembling operation, as shown in Figure 13 A, the assembling object WK1 held by the first hand HD1 of the robot (the workpiece WK1 is, for example, a screwdriver) is assembled to the first assembled object WK2 held by the second hand HD2 of the robot (the workpiece WK2 is, for example, a screw), and the workpiece WK2, in the assembled state with the workpiece WK1, is assembled to the second assembled object WK3 on the operation post (the workpiece WK3 is, for example, a screw hole). After the assembling operation, the assembled state shown in Figure 13 B is reached.
Specifically, when such processing is performed, as shown in Figure 14 (A), the control unit 120 performs the characteristic quantity detection processing of the first assembled object WK2 based on a first captured image PIM31 in which at least the first assembled object WK2 in the assembling operation appears. The first captured image in this example refers to the captured image used when the assembling operation of the assembling object WK1 and the first assembled object WK2 is performed.
Then, according to the characteristic quantity of the first assembled object WK2, the control unit 120 moves the assembling object WK1 as shown by the arrow YJ1 of Figure 13 A.
Next, as shown in Figure 14 (B), after the assembling object WK1 has been moved, the control unit 120 performs the characteristic quantity detection processing of the second assembled object WK3 based on a second captured image PIM41 in which at least the second assembled object WK3 appears. The second captured image in this example refers to the captured image used when the assembling operation of the first assembled object WK2 and the second assembled object WK3 is performed.
Then, according to the characteristic quantity of the second assembled object WK3, the control unit 120 moves the assembling object WK1 and the first assembled object WK2 as shown by the arrow YJ2 of Figure 13 A.
As a result, each time the assembling operation is performed, even when the positions of the first assembled object WK2 and the second assembled object WK3 are offset, the assembling operation of the assembling object, the first assembled object and the second assembled object can still be performed.
Next, the process flow in the assembling operation of the three workpieces shown in Figure 13 A and Figure 13 B is described in detail with the flowchart of Figure 15.
First, the captured image acquisition unit 110 acquires one or more first captured images in which at least the assembling object WK1 and the first assembled object WK2 in the assembling operation appear. Then, the control unit 120 performs the characteristic quantity detection processing of the assembling object WK1 and the first assembled object WK2 based on the first captured images.
In this example, first, the captured image acquisition unit 110 acquires the first captured image PIM31 in which the first assembled object WK2 appears (S401). Then, the control unit 120 performs the characteristic quantity detection processing of the first assembled object WK2 based on the first captured image PIM31, thereby detecting the first target characteristic quantity FB1 (S402). Here, the target feature point GP1 and the target feature point GP2 shown in Figure 14 (A) are detected as the first target characteristic quantity FB1.
Next, the captured image acquisition unit 110 acquires the first captured image PIM32 in which the assembling object WK1 appears (S403). Then, the control unit 120 performs the characteristic quantity detection processing of the assembling object WK1 based on the first captured image PIM32, thereby detecting the first concern characteristic quantity FA (S404). Here, the concern feature point IP1 and the concern feature point IP2 are detected as the first concern characteristic quantity FA.
In steps S401 to S404, an example of acquiring a plurality of first captured images (PIM31 and PIM32) in which the assembling object WK1 and the first assembled object WK2 respectively appear has been described; however, as shown in Figure 14 (A), when the assembling object WK1 and the first assembled object WK2 appear in the same first captured image PIM31, the characteristic quantities of both the assembling object WK1 and the first assembled object WK2 may be detected from the first captured image PIM31.
Next, according to the characteristic quantity of the assembling object WK1 (the first concern characteristic quantity FA) and the characteristic quantity of the first assembled object WK2 (the first target characteristic quantity FB1), the control unit 120 moves the assembling object WK1 so that the relative position and posture relation of the assembling object WK1 and the first assembled object WK2 becomes the first target relative position and posture relation (S405). Specifically, in the captured image, the assembling object WK1 is moved so that the concern feature point IP1 approaches the target feature point GP1 and the concern feature point IP2 approaches the target feature point GP2. This movement corresponds to the movement of the arrow YJ1 of Figure 13 A.
Then, the control unit 120 judges whether the assembling object WK1 and the first assembled object WK2 are in the first target relative position and posture relation (S406). When it is judged that the assembling object WK1 and the first assembled object WK2 are not in the first target relative position and posture relation, the process returns to step S403 and the processing is performed again.
On the other hand, when it is judged that the assembling object WK1 and the first assembled object WK2 are in the first target relative position and posture relation, the second concern characteristic quantity FB2 of the first assembled object WK2 is detected from the first captured image PIM32 (S407). Specifically, as shown in Figure 14 (B) described later, the control unit 120 detects the concern feature point IP3 and the concern feature point IP4 as the second concern characteristic quantity FB2.
Next, the captured image acquisition unit 110 acquires a second captured image PIM41 in which at least the second assembled object WK3 appears, as shown in Figure 14 (B) (S408).
Then, the control unit 120 performs the characteristic quantity detection processing of the second assembled object WK3 based on the second captured image PIM41, thereby detecting the second target characteristic quantity FC (S409). Specifically, as shown in Figure 14 (B), the control unit 120 detects the target feature point GP3 and the target feature point GP4 as the second target characteristic quantity FC.
Next, according to the characteristic quantity of the first assembled object WK2 (the second concern characteristic quantity FB2) and the characteristic quantity of the second assembled object WK3 (the second target characteristic quantity FC), the control unit 120 moves the assembling object WK1 and the first assembled object WK2 so that the relative position and posture relation of the first assembled object WK2 and the second assembled object WK3 becomes the second target relative position and posture relation (S410).
Specifically, in the captured image, the assembling object WK1 and the first assembled object WK2 are moved so that the concern feature point IP3 approaches the target feature point GP3 and the concern feature point IP4 approaches the target feature point GP4. This movement corresponds to the movement of the arrow YJ2 of Figure 13 A.
Then, the control unit 120 judges whether the first assembled object WK2 and the second assembled object WK3 are in the second target relative position and posture relation (S411). When it is judged that the first assembled object WK2 and the second assembled object WK3 are not in the second target relative position and posture relation, the process returns to step S408 and the processing is performed again.
On the other hand, when it is judged, as in the captured image PIME shown in Figure 14 (C), that the first assembled object WK2 and the second assembled object WK3 are in the assembled state, that is, in the second target relative position and posture relation, the processing ends.
In this way, the visual servo can be performed so that the concern feature points (IP1 and IP2) of the assembling object WK1 approach the target feature points (GP1 and GP2) of the first assembled object WK2, and the concern feature points (IP3 and IP4) of the first assembled object WK2 approach the target feature points (GP3 and GP4) of the second assembled object WK3.
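The two-stage flow of Figure 15 can be summarized as two successive servo stages. The helper servo_stage below is hypothetical and stands in for a full visual-servo loop of acquire, detect and move steps:

```python
def assemble_three_workpieces(servo_stage):
    """Sequential two-stage assembly, sketched as two servo stages.

    servo_stage(concern, target) -- hypothetical helper that runs a visual-servo
    loop until the named concern feature points reach the named target feature
    points, returning True on success.
    """
    # Stage 1: bring WK1's concern points IP1, IP2 to WK2's target points
    # GP1, GP2 (first target relative position and posture relation).
    if not servo_stage(("IP1", "IP2"), ("GP1", "GP2")):
        return False
    # Stage 2: WK2 now supplies concern points IP3, IP4; bring them to WK3's
    # target points GP3, GP4, moving WK1 and WK2 together.
    if not servo_stage(("IP3", "IP4"), ("GP3", "GP4")):
        return False
    return True  # assembled state reached
```

The point of the structure is that the second stage only starts after the first relative posture relation is judged to hold, which is exactly the branch at the first judgment step of the flowchart.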
Alternatively, unlike the flowchart of Figure 15, it is not necessary to assemble the assembling object WK1 and the first assembled object WK2 first; the three workpieces may instead be assembled simultaneously as shown in Figure 16 (A) to Figure 16 (C).
The process flow in this case is shown in the flowchart of Figure 17. First, the captured image acquisition unit 110 acquires one or more captured images in which the assembling object WK1, the first assembled object WK2 and the second assembled object WK3 in the assembling operation appear (S501). In this example, the captured image PIM51 shown in Figure 16 (A) is acquired.
Next, the control unit 120 performs the characteristic quantity detection processing of the assembling object WK1, the first assembled object WK2 and the second assembled object WK3 based on the one or more captured images (S502 to S504).
In this example, as shown in Figure 16 (A), the target feature point GP3 and the target feature point GP4 are detected as the characteristic quantity of the second assembled object WK3 (S502). Then, the target feature point GP1, the target feature point GP2, the concern feature point IP3 and the concern feature point IP4 are detected as the characteristic quantity of the first assembled object WK2 (S503). Further, the concern feature point IP1 and the concern feature point IP2 are detected as the characteristic quantity of the assembling object WK1 (S504). In addition, when the three workpieces appear in different captured images, the characteristic quantity detection processing may be performed on each of the different captured images.
Next, according to the characteristic quantity of the assembling object WK1 and the characteristic quantity of the first assembled object WK2, the control unit 120 moves the assembling object WK1 so that the relative position and posture relation of the assembling object WK1 and the first assembled object WK2 becomes the first target relative position and posture relation and, at the same time, according to the characteristic quantity of the first assembled object WK2 and the characteristic quantity of the second assembled object WK3, moves the first assembled object WK2 so that the relative position and posture relation of the first assembled object WK2 and the second assembled object WK3 becomes the second target relative position and posture relation (S505).
That is, the assembling object WK1 and the first assembled object WK2 are moved simultaneously so that the concern feature point IP1 approaches the target feature point GP1, the concern feature point IP2 approaches the target feature point GP2, the concern feature point IP3 approaches the target feature point GP3, and the concern feature point IP4 approaches the target feature point GP4.
Then, the captured image acquisition unit 110 reacquires a captured image (S506), and the control unit 120 judges, based on the reacquired captured image, whether the three workpieces, that is, the assembling object WK1, the first assembled object WK2 and the second assembled object WK3, are in the target relative position and posture relation (S507).
For example, when the captured image acquired in step S506 is the captured image PIM52 shown in Figure 16 (B) and it is judged that the three workpieces are not yet in the target relative position and posture relation, the process returns to step S503 and the processing is repeated. The processing from step S503 onward is then performed based on the reacquired captured image PIM52.
On the other hand, when the captured image acquired in step S506 is the captured image PIME shown in Figure 16 (C), it is judged that the three workpieces are in the target relative position and posture relation, and the processing ends.
In this way, the assembling operation of the three workpieces can be carried out simultaneously. As a result, the working time of the assembling operation of the three workpieces can be shortened.
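The simultaneous variant can be sketched as one loop in which both residuals are evaluated from a single captured image and both workpieces are commanded in the same cycle. All helper names are assumed, and the simple proportional update is only illustrative:

```python
import numpy as np

def simultaneous_assembly(detect_all, move_wk1, move_wk2,
                          tol=1.0, gain=0.5, max_iters=200):
    """Simultaneous three-workpiece servo sketch.

    detect_all -- returns (ip12, gp12, ip34, gp34): concern points of WK1,
                  target points on WK2, concern points on WK2, target points
                  on WK3, each as an (N, 2) array.
    """
    for _ in range(max_iters):
        ip12, gp12, ip34, gp34 = detect_all()    # one captured image, all features
        e1 = gp12 - ip12                         # WK1 -> WK2 residual
        e2 = gp34 - ip34                         # WK2 -> WK3 residual
        done1 = np.all(np.linalg.norm(e1, axis=1) < tol)
        done2 = np.all(np.linalg.norm(e2, axis=1) < tol)
        if done1 and done2:                      # all three in target relation
            return True
        move_wk1(gain * e1.mean(axis=0))         # move WK1 ...
        move_wk2(gain * e2.mean(axis=0))         # ... and WK2 in the same cycle
    return False
```

Because WK1 tracks the moving WK2 while WK2 tracks WK3, both residuals shrink together, which is what shortens the working time compared with the two-stage flow.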
In performing the assembling operation of the three workpieces, the assembling operation may also be carried out in the order opposite to the assembling order shown in the flowchart of Figure 15. That is, as shown in Figure 18 (A) to Figure 18 (C), after the first assembled object WK2 is assembled to the second assembled object WK3, the assembling object WK1 may be assembled to the first assembled object WK2.
In this case, as shown in Figure 18 (A), the control unit 120 performs the characteristic quantity detection processing of the second assembled object WK3 based on a first captured image PIM61 in which at least the second assembled object WK3 in the assembling operation appears, and moves the first assembled object WK2 according to the characteristic quantity of the second assembled object WK3. Since the details of the characteristic quantity detection processing are the same as in the example described with Figure 16 (A), the description thereof is omitted.
Next, as shown in Figure 18 (B), the control unit 120 performs the characteristic quantity detection processing of the first assembled object WK2 based on a second captured image PIM71 in which at least the first assembled object WK2 after the movement appears, and moves the assembling object WK1 according to the characteristic quantity of the first assembled object WK2 so as to reach the assembled state of Figure 18 (C).
As a result, the assembling object WK1 does not need to be moved simultaneously with the first assembled object WK2, and the control of the robot can be performed more easily. Moreover, the assembling operation of the three workpieces can be performed even by a single-arm robot rather than a multi-arm robot.
The shoot part (video camera) 200 used in the present embodiment includes, for example, capturing elements such as a CCD (charge-coupled device) and an optical system. The shoot part 200 is arranged, for example, on the ceiling, above the operation post or the like, at such an angle that the detection object in the visual servo (the assembling object, the assembled object, the end effector 310 of the robot 300, or the like) falls within the visual angle of the shoot part 200. The shoot part 200 outputs the information of the captured image to the robot control system 100 and the like. In the present embodiment, the information of the captured image is output to the robot control system 100 as it is, but the configuration is not limited to this. For example, the shoot part 200 may include a device (processor) used for image processing and the like.
3. Robot
Next, Figure 19 A and Figure 19 B show configuration examples of a robot 300 that uses the robot control system 100 of the present embodiment. In both Figure 19 A and Figure 19 B, the robot 300 has an end effector 310.
The end effector 310 is a component installed at the endpoint of the arm in order to hold, lift, sling or adsorb a workpiece (operation object), or to apply processing to a workpiece. The end effector 310 may be, for example, a hand (holding part), a hook portion, a sucker, or the like. Further, a plurality of end effectors may be provided for one arm. The arm is a component of the robot 300 and is a movable member including one or more joints.
For example, in the robot of Figure 19 A, the robot body 300 (robot) and the robot control system 100 are configured separately. In this case, some or all of the functions of the robot control system 100 are realized by, for example, a PC (personal computer).
The robot of the present embodiment is not limited to the configuration of Figure 19 A; as shown in Figure 19 B, the robot body 300 may be formed integrally with the robot control system 100. That is, the robot 300 may include the robot control system 100. Specifically, as shown in Figure 19 B, the robot 300 may have the robot body (with the arm and the end effector 310) and a base unit portion supporting the robot body, and the robot control system 100 may be accommodated in the base unit portion. In the robot 300 of Figure 19 B, wheels and the like are provided in the base unit portion so that the whole robot can move. In addition, Figure 19 A is an example of a single-arm type, but the robot 300 may also be a multi-arm robot such as the dual-arm type shown in Figure 19 B. The robot 300 may be a robot moved by human hands, or a robot provided with motors driving the wheels and moved by controlling the motors with the robot control system 100. Further, the robot control system 100 does not have to be provided in the base unit portion arranged under the robot 300 as shown in Figure 19 B.
In addition, as shown in Figure 20, the functions of the robot control system 100 may be realized by a server 500 communicably connected with the robot 300 via a network 400 including at least one of wired and wireless connections.
Alternatively, in the present embodiment, part of the processing of the robot control system of the present invention may be performed by the robot control system on the server 500 side. In this case, the processing is realized by decentralized processing with the robot control system provided on the robot 300 side. The robot control system on the robot 300 side is realized by, for example, a terminal installation 330 (control unit) provided in the robot 300.
In this case, the robot control system on the server 500 side performs, among the processes of the robot control system of the present invention, the processes allocated to the server 500. On the other hand, the robot control system provided in the robot 300 performs, among the processes of the robot control system of the present invention, the processes allocated to the robot 300. Each process of the robot control system of the present invention may be a process allocated to the server 500 side or a process allocated to the robot 300 side.
As a result, the server 500, whose processing capacity is higher than that of, for example, the terminal installation 330, can perform processing with a large processing amount. Further, the server 500 can, for example, control the actions of the robots 300 collectively, and can easily make a plurality of robots 300 cooperate.
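The division of labor between the server 500 and the terminal installations 330 can be illustrated schematically. The classes below are stand-ins, not the patent's interfaces; they only show the server distributing one changed action to every connected robot, so that no robot needs to be re-taught individually:

```python
from dataclasses import dataclass, field

@dataclass
class Robot:
    """Stand-in for a robot 300 with its terminal installation 330,
    which executes the locally allocated part of the control."""
    name: str
    log: list = field(default_factory=list)

    def execute(self, command):
        self.log.append(command)      # local, low-latency processing

class Server:
    """Stand-in for the server 500: the heavy, server-allocated processing
    is done here once, and the resulting command is distributed to every
    connected robot to coordinate them collectively."""
    def __init__(self, robots):
        self.robots = robots

    def broadcast(self, command):
        for robot in self.robots:
            robot.execute(command)
```

With this shape, changing the manufactured component amounts to one broadcast from the server rather than a teaching operation per robot, which is the advantage the text attributes to the configuration of Figure 20.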
In recent years, there has been an increasing trend toward manufacturing many kinds of components in small quantities. When the kind of component to be manufactured is changed, the action performed by the robot must be changed. With a configuration such as that of Figure 20, even without redoing the teaching operation for each of the robots 300, the server 500 can collectively change the actions performed by the robots 300.
Further, with a configuration such as that of Figure 20, the trouble at the time of a software update of the robot control system 100 and the like can be greatly reduced compared with the case where one robot control system 100 is provided for each robot 300.
In addition, the robot control system, the robot and the like of the present embodiment may realize part or most of the above processing by a program. In this case, a processor such as a CPU executes the program, thereby realizing the robot control system, the robot and the like of the present embodiment. Specifically, the program stored in an information storage medium is read, and a processor such as a CPU executes the read program. Here, the information storage medium (a medium readable by a computer) is a medium storing programs, data and the like, and its function can be realized by an optical disc (DVD, CD or the like), an HDD (hard disk drive), a memory (card-type memory, ROM or the like) or the like. A processor such as a CPU performs the various processes of the present embodiment according to the program (data) stored in the information storage medium. That is, the information storage medium stores a program for making a computer (a device provided with an operation unit, a processing unit, a storage unit and an output unit) function as each unit of the present embodiment (a program for making the computer execute the processing of each unit).
The present embodiment has been described above in detail, but it will be readily understood by those skilled in the art that various modifications are possible without substantially departing from the new matters and effects of the present invention. Accordingly, all such modification examples are included in the scope of the present invention. For example, a term described at least once in the specification or the drawings together with a different term of broader sense or the same meaning can be replaced by that different term at any position in the specification or the drawings. The configurations and actions of the robot control system, the robot and the program are also not limited to those described in the present embodiment, and various modified implementations are possible.
Second embodiment
Figure 21 is a system configuration diagram showing an example of the structure of a robot system 1 of an embodiment of the present invention. The robot system 1 of the present embodiment mainly includes a robot 10, a control unit 20, a first shoot part 30 and a second shoot part 40.
The robot 10 is an arm-type robot having an arm 11 including a plurality of connectors (joints) 12 and a plurality of connecting rods (links) 13. The robot 10 performs processing according to a control signal from the control unit 20.
Each connector 12 is provided with an actuator (not shown) for moving it. The actuator includes, for example, a servo motor, an encoder and the like. The encoder values output by the encoders are used for the feedback control of the robot 10 performed by the control unit 20.
A hand-eye video camera 15 is provided near the front end of the arm 11. The hand-eye video camera 15 is a unit that photographs objects at the front end of the arm 11 and generates image data. As the hand-eye video camera 15, for example, a visible light camera, an infrared camera or the like can be used.
Of the front end portion of the arm 11, the region not connected with the other regions of the robot 10 (excluding a hand 14 described later) is defined as the endpoint of the arm 11. In the present embodiment, the position of the point E shown in Figure 21 is the position of the endpoint.
Regarding the structure of the robot 10, the main structure has been described in order to explain the features of the present embodiment, and the structure is not limited to the above. Structures that a general holding robot possesses are not excluded. For example, an arm with six axes is shown in Figure 21, but the number of axes (the number of joints) may be further increased or reduced. The number of connecting rods may also be increased or decreased. The shapes, sizes, arrangements, structures and the like of the various components such as the arm, the connecting rods and the connectors may also be changed as appropriate.
The control unit 20 performs the processing of controlling the whole robot 10. The control unit 20 may be arranged at a place away from the main body of the robot 10, or may be built into the robot 10. When the control unit 20 is arranged at a place away from the main body of the robot 10, the control unit 20 is connected with the robot 10 in a wired or wireless manner.
The first shoot part 30 and the second shoot part 40 are units that each photograph the vicinity of the operating area of the arm 11 from a different angle and generate image data. The first shoot part 30 and the second shoot part 40 include, for example, video cameras, and are arranged on the operation post, the ceiling, a wall or the like. As the first shoot part 30 and the second shoot part 40, visible light cameras, infrared cameras or the like can be used. The first shoot part 30 and the second shoot part 40 are connected with the control unit 20, and the images photographed by the first shoot part 30 and the second shoot part 40 are input to the control unit 20. The first shoot part 30 and the second shoot part 40 may also be connected not with the control unit 20 but with the robot 10, or may be built into the robot 10. In that case, the images photographed by the first shoot part 30 and the second shoot part 40 are input to the control unit 20 via the robot 10.
Next, a functional configuration example of the robot system 1 is described. Figure 22 shows a functional block diagram of the robot system 1.
The robot 10 includes an operation control unit 101 that controls the action of the arm 11 according to sensor values such as the encoder values of the actuators and the values of sensors.
The operation control unit 101 drives the actuators according to the information output from the control unit 20, the encoder values of the actuators, the sensor values and the like, so that the arm 11 moves to the position output from the control unit 20. The current position of the endpoint can be obtained from the encoder values and the like of the actuators provided at the connectors 12 and elsewhere.
Control unit 20 mainly possesses position control section 2000, visual servo portion 210 and drive control part 220.It controls position
Portion 2000 processed mainly possesses path acquisition unit 201 and the first control units 202.Visual servo portion 210 mainly possesses image acquisition
Portion 211,212 and second control unit 213 of image processing part.
Position control section 2000 performs the position for moving arm 11 along preset defined path and controls.
The path acquisition unit 201 acquires information on the path. The path is composed of teaching positions; for example, it is formed by linking, in a predetermined order, one or more teaching positions set in advance through teaching. Information on the path, such as the coordinates and order of the teaching positions, is held in the memory 22 (described later with reference to Figure 24 and elsewhere). The path information held in the memory 22 may also be input via the input device 25 or the like. The path information further includes the final position of the endpoint, i.e., information on the target position.
The first control unit 202 sets the next teaching position based on the path information acquired by the path acquisition unit 201, and sets the trajectory of the endpoint.

Furthermore, based on the trajectory of the endpoint, the first control unit 202 determines the next movement position of the arm 11 and determines the target angle of each actuator provided in the joints 12. The first control unit 202 also generates command values, such as target angles, for moving the arm 11, and outputs them to the drive control unit 220. Since the processing performed by the first control unit 202 is well known, a detailed description is omitted.
The visual servo unit 210 moves the arm 11 by performing so-called visual servoing, a control technique that, based on the images captured by the first imaging unit 30 and the second imaging unit 40, measures the change in position relative to an object as visual information and uses it as feedback information to track the object.
As the visual servoing method, a position-based method or a feature-based method can be used. The position-based method controls the robot based on three-dimensional position information of the object, calculated by methods such as stereo vision, which uses two images with parallax as a stereo image pair. The feature-based method controls the robot so that the difference between the images captured by the two imaging units from orthogonal directions and the target images held in advance for each imaging unit becomes zero (the pixel-wise error of each image pair is zero). In this embodiment, for example, the feature-based method is used. Although the feature-based method can be performed with one imaging unit, it is preferable to use two imaging units to improve accuracy.
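As a rough sketch (not part of the patent; the grayscale-list representation is an illustrative assumption), the feature-based objective can be written as a pixel-wise error between the current and target images of each imaging unit, which the servo loop drives toward zero:

```python
def image_error(current, target):
    """Sum of squared pixel differences between a current image and a
    target image, both given as 2-D lists of grayscale values.
    The feature-based method drives this error toward zero."""
    return sum((c - t) ** 2
               for row_c, row_t in zip(current, target)
               for c, t in zip(row_c, row_t))

def total_error(cur1, tgt1, cur2, tgt2):
    """Combined error over the two imaging units (orthogonal views)."""
    return image_error(cur1, tgt1) + image_error(cur2, tgt2)
```

When the endpoint is exactly at the target position, both terms are zero, which is the condition the second control unit 213 works toward.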
The image acquisition unit 211 acquires the image captured by the first imaging unit 30 (hereinafter referred to as the first image) and the image captured by the second imaging unit 40 (hereinafter referred to as the second image). The image acquisition unit 211 outputs the acquired first and second images to the image processing unit 212.
The image processing unit 212 recognizes the tip of the endpoint from the first image and the second image acquired from the image acquisition unit 211, and extracts images containing the endpoint. Since the image recognition processing performed by the image processing unit 212 can use various well-known techniques, a description thereof is omitted.
The second control unit 213 sets the trajectory of the endpoint, i.e., the movement amount and movement direction of the endpoint, based on the images extracted by the image processing unit 212 (hereinafter referred to as current images) and the images taken when the endpoint is located at the target position (hereinafter referred to as target images). The target images are acquired in advance and stored in the memory 22 or the like.
Furthermore, based on the movement amount and movement direction of the endpoint, the second control unit 213 determines the target angle of each actuator provided in the joints 12. The second control unit 213 then generates command values, i.e., target angles, for moving the arm 11 and outputs them to the drive control unit 220. Since the processing performed by the second control unit 213 is well known, a detailed description is omitted.
In the articulated robot 10, once the angle of each joint is determined, the position of the endpoint is uniquely determined through forward kinematics. That is, in an N-joint robot, one target position can be expressed by N joint angles, so if the combination of N joint angles is regarded as one target joint angle, the trajectory of the endpoint can be thought of as a set of joint angles. Accordingly, the command values output from the first control unit 202 and the second control unit 213 may be values related to position (target positions) or values related to joint angles (target angles).
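The forward-kinematics relation mentioned above can be sketched for a planar arm (a simplified illustration, not the robot 10's actual kinematics; link lengths and joint conventions are assumptions):

```python
import math

def forward_kinematics(joint_angles, link_lengths):
    """Endpoint (x, y) of a planar N-joint arm: each joint angle is
    measured relative to the previous link, so one combination of N
    joint angles uniquely determines one endpoint position."""
    x = y = 0.0
    heading = 0.0
    for angle, length in zip(joint_angles, link_lengths):
        heading += angle                  # accumulate relative joint angles
        x += length * math.cos(heading)   # advance along the current link
        y += length * math.sin(heading)
    return x, y
```

This is why a target position and a target joint-angle combination are interchangeable as command values: the mapping from joint angles to endpoint position is unique (though the inverse generally is not).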
The drive control unit 220 instructs the operation control unit 101 to move the arm 11 so that the endpoint moves, based on the information acquired from the first control unit 202 and the second control unit 213. The processing performed by the drive control unit 220 is described in detail later.
Figure 23 is a data flow diagram of the robot system 1.
In the position control unit 2000, a feedback loop is run that brings each joint of the robot close to its target angle by position control. The information on the predetermined path includes information on the target position. When the first control unit 202 acquires the information on the target position, it generates a trajectory and command values (here, target angles) based on the target position information acquired by the path acquisition unit 201 and the current position.
In the visual servo unit 210, a visual feedback loop is run that approaches the target position using the information from the first imaging unit 30 and the second imaging unit 40. The second control unit 213 acquires target images as information on the target position. Since the current position on the current image and the target position on the image are expressed in the image coordinate system, the second control unit 213 transforms them into the robot coordinate system. The second control unit 213 then generates a trajectory and command values (here, target angles) based on the transformed current position and the target image.
The drive control unit 220 outputs to the robot 10 a command value formed from the command value output from the first control unit 202 and the command value output from the second control unit 213. Specifically, the drive control unit 220 multiplies the command value output from the first control unit 202 by a coefficient α, multiplies the command value output from the second control unit 213 by a coefficient 1−α, and outputs the sum of these values to the robot 10. Here, α is a real number larger than 0 and smaller than 1.
In addition, according to the command value exported from the first control units 202 and the command value exported from the second control unit 213
And it's not limited to that for the mode of the command value formed.
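As a minimal sketch of the weighted combination described above (vector-of-joint-angles representation is an assumption for illustration):

```python
def blend_commands(pos_cmd, vs_cmd, alpha):
    """Per-joint drive command: position-control command weighted by
    alpha, visual-servo command weighted by 1 - alpha (0 < alpha < 1)."""
    if not 0.0 < alpha < 1.0:
        raise ValueError("alpha must satisfy 0 < alpha < 1")
    return [alpha * p + (1.0 - alpha) * v
            for p, v in zip(pos_cmd, vs_cmd)]
```

With α = 0.5 the two controllers contribute equally; as α approaches 1 the blended command approaches pure position control.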
Here, in this embodiment, the first control unit 202 outputs command values at constant intervals (e.g., every 1 millisecond (msec)), while the second control unit 213 outputs command values at intervals longer than the output interval of the first control unit 202 (e.g., every 30 msec). Therefore, when no command value is output from the second control unit 213, the drive control unit 220 multiplies the command value output from the first control unit 202 by the coefficient α, multiplies the command value most recently output from the second control unit 213 by the coefficient 1−α, and outputs the sum of these values to the robot 10. The command value most recently output from the second control unit 213 is temporarily stored in a storage device such as the memory 22 (see Figure 24), and the drive control unit 220 reads it from the storage device and uses it.
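The hold-and-blend behavior across the two update rates can be sketched as follows (a simplified model, not the patented implementation; the class and attribute names are assumptions):

```python
class DriveController:
    """Blends a fast position-control stream (e.g. every 1 msec) with a
    slow visual-servo stream (e.g. every 30 msec), holding the most
    recently received visual-servo command between updates."""

    def __init__(self, alpha):
        self.alpha = alpha
        self.last_vs_cmd = None  # stands in for the value stored in memory 22

    def update(self, pos_cmd, vs_cmd=None):
        if vs_cmd is not None:
            self.last_vs_cmd = vs_cmd   # a fresh visual-servo sample arrived
        if self.last_vs_cmd is None:
            return list(pos_cmd)        # no visual-servo command yet
        return [self.alpha * p + (1.0 - self.alpha) * v
                for p, v in zip(pos_cmd, self.last_vs_cmd)]
```

Most ticks supply only `pos_cmd`; the held visual-servo value keeps the blend well defined between the slower camera updates.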
The operation control unit 101 acquires command values (target angles) from the control unit 20. The operation control unit 101 obtains the current angles of the endpoint from the encoder values of the actuators provided in the joints 12 and elsewhere, and calculates the difference between the target angle and the current angle (the deviation angle). The operation control unit 101 then calculates the movement speed of the arm 11 from, for example, the deviation angle (the larger the deviation angle, the faster the movement speed), and moves the arm 11 by the calculated deviation angle at the calculated movement speed.
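A single control tick of this deviation-proportional law might look like the following sketch (the gain and tick length are illustrative assumptions, not values from the patent):

```python
def joint_step(target_angle, current_angle, gain=2.0, dt=0.001):
    """One control tick: compute the deviation angle and move at a
    speed that grows with the deviation (simple proportional law)."""
    deviation = target_angle - current_angle
    speed = gain * abs(deviation)            # larger deviation -> faster
    step = speed * dt if deviation > 0 else -speed * dt
    if abs(step) > abs(deviation):           # never overshoot in one tick
        step = deviation
    return current_angle + step
```

Iterating this update drives the current angle toward the target, slowing down as the deviation shrinks.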
Figure 24 is a block diagram showing an example of the schematic configuration of the control unit 20. As shown in the figure, the control unit 20, composed of, for example, a computer, includes a central processing unit (CPU) 21 as an arithmetic device; a memory 22 composed of RAM (Random Access Memory) as a volatile storage device and ROM (Read Only Memory) as a nonvolatile storage device; an external storage device 23; a communication device 24 that communicates with external devices such as the robot 10; an input device 25 such as a mouse or keyboard; an output device 26 such as a display; and an interface (I/F) 27 that connects the control unit 20 to other units.
Each of the above functional units is realized, for example, by the CPU 21 reading a predetermined program stored in the memory 22 and executing it. The predetermined program may, for example, be installed in the memory 22 in advance, or may be downloaded from a network via the communication device 24 for installation or updating.
The above configuration of the robot system 1 describes the main components in explaining the features of this embodiment, and the configuration is not limited to the above. Configurations that ordinary robot systems have are not excluded.
Next, the characteristic processing of the robot system 1 configured as described above is described. In this embodiment, the description takes as an example an operation in which the hand-eye camera 15 is used to visually inspect objects O1, O2, and O3 in order, as shown in Figure 21.
When a control start instruction is input via a button or the like (not shown), the control unit 20 controls the arm 11 by position control and visual servoing. When a command value is input from the second control unit 213 (in this embodiment, once every 30 cycles), the drive control unit 220 uses a command value formed by combining, with arbitrary components, the value output from the first control unit 202 (hereinafter referred to as the position-control command value) with the value output from the second control unit 213 (hereinafter referred to as the visual-servo command value), and instructs the operation control unit 101. When no command value is input from the second control unit 213 (in this embodiment, 29 out of every 30 cycles), the drive control unit 220 uses the position-control command value output from the first control unit 202 together with the command value most recently output from the second control unit 213 and temporarily stored in the memory 22 or the like, and instructs the operation control unit 101.
Figure 25A is a diagram explaining the trajectory of the endpoint when the arm 11 is controlled by position control and visual servoing. In Figure 25A, object O1 is placed at position 1, object O2 at position 2, and object O3 at position 3. In Figure 25A, objects O1, O2, and O3 lie on the same plane (the XY plane), and the hand-eye camera 15 is at a constant position in the Z direction.
The solid-line trajectory in Figure 25A is the trajectory of the endpoint when only the position-control command value is used. This trajectory passes over positions 1, 2, and 3; when objects O1, O2, and O3 are always placed at the same position and posture, they can be visually inspected by position control alone.
In contrast, in Figure 25A, consider the case where object O2 moves from position 2 on the solid line to the position 2 after movement. Since the endpoint moves over the solid-line position of object O2, when only the position-control command value is used, the inspection accuracy of object O2 may be reduced, or the inspection may become impossible.
Since this corresponds to movement of the object's position, visual servoing is well suited. With visual servoing, even if the position of the object shifts, the endpoint can be moved directly over the object. For example, if object O2 is at the position 2 after movement and the image shown in Figure 25B is given as the target image, then when only the visual-servo command value is used, the endpoint passes along the trajectory shown by the dotted line in Figure 25A.
Visual servoing is a highly useful control method that can respond to shifts of the object, but because of the frame rate of the first imaging unit 30 and the second imaging unit 40 and the image processing time of the image processing unit 212, it takes more time to reach the target position than position control does.
Therefore, by using the command values of position control and visual servoing simultaneously (performing position control and visual servoing at the same time, i.e., parallel control), the inspection accuracy is ensured against position shifts of objects O1, O2, and O3, while the arm moves at higher speed than with visual servoing alone.
Here, "simultaneously" is not limited to the exact same time or moment. For example, using the command values of position control and visual servoing simultaneously is a concept that includes both the case where the position-control command value and the visual-servo command value are output at the same time, and the case where they are output staggered by a minute time. The minute time may be of any length as long as processing identical to the simultaneous case can be performed.
In particular, in the case of visual inspection, since it suffices that the field of view of the hand-eye camera 15 contains object O2 (object O2 need not be at the center of the field of view), there is no problem even if the trajectory does not pass directly over object O2.
Therefore, in this embodiment, the drive control unit 220 combines the position-control command value and the visual-servo command value so that a trajectory is formed in which the field of view of the hand-eye camera 15 contains object O2. The dash-dot line in Figure 25A shows the trajectory in this case. This trajectory passes through positions that are not directly above object O2 but can still ensure inspection accuracy to the greatest possible extent.
Besides shifts in the installation position of the object, thermal expansion of the components of the arm 11 caused by temperature changes and the like also contributes to the deviation between the position on the path and the actual position of the object; this case, too, can be handled by using the command values of position control and visual servoing simultaneously.
The position of the dash-dot trajectory in Figure 25A changes depending on the component α. Figure 26 is a diagram explaining the component α.
Figure 26A is a diagram showing the relation between the distance to the target (here, objects O1, O2, O3) and the component α. Line A is the case where the component α is constant regardless of the distance to the target position. Line B is the case where the component α is reduced in stages according to the distance to the target position. Lines C and D are cases where the component α is reduced continuously according to the distance to the target position: line C is a case where the component α is made smaller in proportion to the distance, and line D is a case where the distance is proportional to the component α. In each case, 0 < α < 1.
In the cases of lines B, C, and D in Figure 26A, the component α is set so that as the distance to the target position becomes shorter, the proportion of the position-control command value decreases and the proportion of the visual-servo command value increases. As a result, even when the target position has moved, a trajectory can be generated that brings the endpoint closer to the target position.
Furthermore, since the component α is set so that the command values of position control and visual servoing are superimposed, the component α can be changed continuously according to the distance to the target position. By changing the component continuously with distance, the control can be switched smoothly from arm control dominated by position control to arm control dominated by visual servoing.
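The α schedules of lines A through D can be sketched as simple functions of the distance to the target (the thresholds and the maximum distance are illustrative assumptions, not values from the patent):

```python
def alpha_constant(distance, alpha0=0.8):
    """Line A: alpha independent of the distance to the target."""
    return alpha0

def alpha_stepwise(distance, far=0.10, near=0.02):
    """Line B: alpha reduced in stages as the target gets closer."""
    if distance > far:
        return 0.9
    if distance > near:
        return 0.5
    return 0.1

def alpha_linear(distance, max_distance=0.10):
    """Lines C/D: alpha changes continuously with distance; here alpha
    is proportional to the distance, so position control dominates far
    from the target and visual servoing dominates near it.
    Clamped to keep 0 < alpha < 1."""
    alpha = distance / max_distance
    return min(max(alpha, 0.01), 0.99)
```

The continuous variant avoids abrupt changes in the blended command, which matches the smooth-switching motivation above.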
As shown in Figure 26A, the component α is not limited to being defined by the distance to the target (here, objects O1, O2, O3). As shown in Figure 26B, the component α may also be defined by the distance traveled from the start position. That is, the drive control unit 220 may determine the component α according to the difference between the current position and the target position.
The distance to the target and the distance from the start position may be obtained from the path acquired by the path acquisition unit 201, or may be obtained from the current image and the target image. For example, when obtained from the path, they can be calculated sequentially from the start position included in the path information, the coordinates and order of the targets (objects), and the coordinates of the current position.
The relation between the component and the difference between the current position and the target position, as shown in Figure 26, may, for example, be input via an input means such as the input device 25, so that the arm 11 is controlled along the trajectory desired by the user. Alternatively, the relation between the difference between the current position and the target position and the component is stored in advance in a storage means such as the memory 22 and used from there. The relation stored in the storage means may be content input via the input means or content initially set in advance.
According to this embodiment, since the arm (and hence the hand-eye camera) is controlled using a command value formed by combining the command values of position control and visual servoing at a constant ratio, high-speed inspection can be performed accurately even when position shifts of the objects occur. In particular, according to this embodiment, the speed can be made comparable to position control (faster than visual servoing), while the inspection is more robust to position shifts than position control.
In this embodiment, the position-control command value and the visual-servo command value are normally combined, but when, for example, the position shift of object O2 is larger than a predetermined threshold, the arm 11 may be moved using only the visual-servo command value. The second control unit 213 determines from the current image whether the position shift of object O2 is larger than the predetermined threshold.
In this embodiment, the drive control unit 220 determines the component α according to the difference between the current position and the target position, but the method of determining the component α is not limited to this. For example, the drive control unit 220 may change the component α with the passage of time. The drive control unit 220 may also change the component α with the passage of time until a certain time, and thereafter change the component α according to the difference between the current position and the target position.
Third Embodiment
The second embodiment of the present invention normally controls the arm using a command value formed by combining the command values of position control and visual servoing at a constant ratio, but the scope of the present invention is not limited to this.

The third embodiment of the present invention combines, according to the position of the object, the case of using only the command values of position control with the case of using a command value formed by combining the command values of position control and visual servoing at a constant ratio. The robot system 2 of the third embodiment of the present invention is described below. Since the configuration of the robot system 2 is identical to that of the robot system 1 of the second embodiment, a description of the configuration of the robot system 2 is omitted, and the processing of the robot system 2 is described. Parts identical to those of the second embodiment are given the same reference numerals, and descriptions thereof are omitted.
Figure 27 is a flowchart showing the flow of the control processing of the arm 11 of the present invention. This processing is started, for example, by inputting a control start instruction via a button or the like (not shown). In this embodiment, visual inspection of objects O1 and O2 is performed.
When this processing starts, the position control unit 2000 performs position control (step S1000). That is, the first control unit 202 generates a command value based on the path information acquired by the path acquisition unit 201, and outputs it to the drive control unit 220. The drive control unit 220 outputs the command value output from the first control unit 202 to the robot 10. The operation control unit 101 then moves the arm 11 (i.e., the endpoint) according to the command value.
Next, the first control unit 202 judges whether the endpoint has passed switching point 1 as a result of moving the endpoint by position control (step S1002). Information representing the position of switching point 1 is included in the predetermined path information.
Figure 28 is a diagram explaining the positions of objects O1 and O2, the positions of the switching points, and the trajectory of the endpoint. In this embodiment, switching point 1 is placed between the start location and object O1.
When the endpoint has not passed switching point 1 (No in step S1002), the control unit 20 repeats the processing of step S1000.
When the endpoint has passed switching point 1 (Yes in step S1002), the drive control unit 220 controls the arm 11 by position control and visual servoing (step S1004). That is, the first control unit 202 generates a command value based on the path information acquired by the path acquisition unit 201, and outputs it to the drive control unit 220. The second control unit 213 generates a command value based on the current image processed by the image processing unit 212 and the target image, and outputs it to the drive control unit 220. The drive control unit 220 switches the component α in stages with the passage of time, combines the command value output from the first control unit 202 with the command value output from the second control unit 213 using the switched component α, and outputs the result to the robot 10. The operation control unit 101 then moves the arm 11 (i.e., the endpoint) according to the command value.
The processing of step S1004 is described in detail below. Before the processing of step S1004, i.e., in the processing of step S1000, the command value from the visual servo unit 210 is not used. Therefore, the component α of the command value from the position control unit 2000 is 1 (the component 1−α of the command value from the visual servo unit 210 is 0).
After the processing of step S1004 starts, when a certain time (e.g., 10 msec) has passed, the drive control unit 220 switches the component α of the command value from the position control unit 2000 from 1 to 0.9. The component 1−α of the command value from the visual servo unit 210 thus becomes 0.1. The drive control unit 220 then combines the command values, with the component α of the command value from the position control unit 2000 at 0.9 and the component 1−α of the command value from the visual servo unit 210 at 0.1, and outputs the result to the robot 10.
Thereafter, when the certain time has passed again, the drive control unit 220 switches the component α of the command value from the position control unit 2000 from 0.9 to 0.8, and switches the component 1−α of the command value from the visual servo unit 210 from 0.1 to 0.2. In this way, the component α is switched in stages with the passage of the certain time, and the command value output from the first control unit 202 is combined with the command value output from the second control unit 213 using the switched component.
The drive control unit 220 repeats the switching of the component α and the combination of the command values until the component α of the command value from the position control unit 2000 becomes 0.5 and the component 1−α of the command value from the visual servo unit 210 becomes 0.5. After the component α of the command value from the position control unit 2000 has become 0.5 and the component 1−α of the command value from the visual servo unit 210 has become 0.5, the drive control unit 220 repeats the combination of the command values while maintaining the component α without switching it.
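One switching event of this staged ramp-down can be sketched as follows (the step size matches the 0.1 decrements described above; the function name is an assumption):

```python
def ramp_alpha_down(alpha, step=0.1, floor=0.5):
    """One switching event in step S1004: lower alpha by one step each
    time the fixed interval (e.g. 10 msec) elapses, and hold it once it
    reaches the floor of 0.5.  round() avoids float drift."""
    return max(round(alpha - step, 10), floor)

# Simulated sequence of switching events: 1.0 -> 0.9 -> ... -> 0.5 -> hold
alphas = [1.0]
for _ in range(7):
    alphas.append(ramp_alpha_down(alphas[-1]))
```

The mirrored ramp-up of step S1008 would use the same idea with `alpha + step` and a ceiling of 1.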
As a result, visual inspection can be performed even when the position of object O1 changes. In addition, when the endpoint is far from the object, it is moved by position control alone, allowing high-speed processing. When the endpoint is close to the object, it is moved by position control and visual servoing, so changes in the position of the object can also be handled. Moreover, by switching the component α gradually, sudden motions and vibrations of the arm 11 can be prevented.
Note that, in the processing of step S1004, when the endpoint passes switching point 2 (step S1006, described in detail later) before the component α becomes 0.5 while the switching of the component α and the combination of the command values are being performed, the remaining switching of the component α and combination of the command values are not performed, and the processing proceeds to step S1006.
Next, the first control unit 202 judges whether the endpoint has passed switching point 2 as a result of moving the endpoint by position control and visual servoing (step S1006). Information representing the position of switching point 2 is included in the path information. As shown in Figure 28, switching point 2 is set at object O1.
When the endpoint has not passed switching point 2 (No in step S1006), the control unit 20 repeats the processing of step S1004.
When the endpoint has passed switching point 2 (Yes in step S1006), the drive control unit 220 switches the component α so that it increases in stages with the passage of time, combines the command value output from the first control unit 202 with the command value output from the second control unit 213 using the switched component α, and outputs the result to the robot 10. The operation control unit 101 moves the arm 11 (i.e., the endpoint) according to the command value (step S1008).
The processing of step S1008 is described in detail below. Before the processing of step S1008, i.e., in the processing of step S1006, the drive control unit 220 combines the command values with the component α of the command value from the position control unit 2000 at 0.5 and the component 1−α of the command value from the visual servo unit 210 at 0.5.
After the processing of step S1008 starts, when a certain time (e.g., 10 msec) has passed, the drive control unit 220 switches the component α of the command value from the position control unit 2000 from 0.5 to 0.6. The component 1−α of the command value from the visual servo unit 210 thus becomes 0.4. The drive control unit 220 then combines the command values, with the component α of the command value from the position control unit 2000 at 0.6 and the component 1−α of the command value from the visual servo unit 210 at 0.4, and outputs the result to the robot 10.
Thereafter, when the certain time has passed again, the drive control unit 220 switches the component α of the command value from the position control unit 2000 from 0.6 to 0.7, and switches the component 1−α of the command value from the visual servo unit 210 from 0.4 to 0.3. In this way, the component α is switched in stages with the passage of the certain time, and the command value output from the first control unit 202 is combined with the command value output from the second control unit 213 using the switched component.
The switching of component α is repeated in drive control part 220 before component α becomes 1.Become 1 situation in component α
Under, the component 1- α of the command value from visual servo portion 210 are 0.Therefore, drive control part 220 to robot 10 export from
The command value that the first control units 202 exports.In this way, operation control part 101 makes the mobile (step of arm 11 (i.e. endpoint) according to command value
Rapid S1010).As a result, it controls to move endpoint by position.The processing of step S1010 is identical with step S1000.
In this way, in the stage after passing object O1, the endpoint is moved by position control, allowing high-speed processing. Moreover, by switching the component α gradually, sudden motions and vibrations of the arm 11 can be prevented.
Next, the first control unit 202 judges whether the endpoint has passed switching point 3 as a result of moving the endpoint by position control (step S1012). Information representing the position of switching point 3 is included in the predetermined path information. As shown in Figure 28, switching point 3 is placed between object O1 (switching point 2) and object O2.
When the endpoint has not passed switching point 3 (No in step S1012), the control unit 20 repeats the processing of step S1010.
When the endpoint has passed switching point 3 (Yes in step S1012), the drive control unit 220 switches the component α in stages with the passage of time, combines the command value output from the first control unit 202 with the command value output from the second control unit 213 using the switched component α, and outputs the result to the robot 10. The operation control unit 101 moves the arm 11 (i.e., the endpoint) according to the command value (step S1014). The processing of step S1014 is identical to that of step S1004.
Next, the first control unit 202 judges whether the endpoint has passed switching point 4 as a result of moving the endpoint by position control and visual servoing (step S1016). Information representing the position of switching point 4 is included in the path information. As shown in Figure 28, switching point 4 is set at object O2.
In endpoint not by the way that in the situation (being no in step S1016) of switching point 4, step is repeated in control unit 20
The processing of S1014.
In endpoint by the situation (being yes in step S1016) of switching point 4, drive control part 220 so that component α with
The process of time and interim increased mode switches over component α, will be from the first control and using the component α after switching
The command value that portion 202 exports is synthesized with the command value exported from the second control unit 213, and is exported to robot 10.Action
Control unit 101 makes arm 11 (i.e. endpoint) mobile (step S1018) according to command value.The processing of step S1018 and step S1008
It is identical.
The drive control unit 220 repeats the switching of the component α until the component α becomes 1. When the component α becomes 1, the drive control unit 220 outputs to the robot 10 the command value output from the first control unit 202. The operation control unit 101 moves the arm 11 (that is, the endpoint) in accordance with the command value (step S1020). The processing of step S1020 is the same as that of step S1010.

Next, the first control unit 202 judges, from the result of moving the endpoint by position control, whether the endpoint has reached the target point (step S1022). Information representing the position of the target point is included in the preset path-related information.

When the endpoint has not reached the target point (No in step S1022), the control unit 20 repeats the processing of step S1020.

When the endpoint has reached the target point (Yes in step S1022), the drive control unit 220 ends the processing.
According to the present embodiment, when the endpoint is close to an object, it is moved by position control and visual servoing, so that changes in the position of the object can be accommodated. In addition, when a predetermined condition is satisfied, for example when the endpoint (current position) is farther than necessary from the object, or when the endpoint (current position) has passed the object, the endpoint is moved by position control alone, which allows high-speed processing.

Further, according to the present embodiment, when switching between control based on both position control and visual servoing and control based on position control alone, the component α is switched gradually, so that sudden motion and vibration of the arm can be prevented.
In the present embodiment, when the component α is switched gradually, it is switched stepwise by 0.1 each time a certain period elapses; however, the method of gradually switching the component α is not limited to this. For example, as shown in Fig. 26, the component α may be changed in accordance with the distance to the position of the object (corresponding to the target position in Fig. 26A) or the distance from the object (corresponding to the start position in Fig. 26B). Further, as shown in Fig. 26, the component α may be changed continuously (see lines C and D in Figs. 26A and 26B, etc.).
In the present embodiment, when the command values of both position control and visual servoing are used (steps S1004, S1008, S1014 and S1018), the component α is set to 0.5, 0.6, 0.7, 0.8 or 0.9; however, the component α may be any real number greater than 0 and smaller than 1.
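The stepwise synthesis described above can be illustrated with a short sketch: the component α moves toward 1 in steps of 0.1 per control period (the step size used in this embodiment), and the two command values are blended as α times the position-control command plus (1-α) times the visual-servo command. The function names and the numeric command values are illustrative assumptions, not part of the embodiment.

```python
def blend(alpha, pos_cmd, vs_cmd):
    """Synthesize command values: alpha * position + (1 - alpha) * visual servo."""
    return [alpha * p + (1.0 - alpha) * v for p, v in zip(pos_cmd, vs_cmd)]

def stepwise_alpha(alpha, target, step=0.1):
    """Move alpha toward its target by one step per control period."""
    if alpha < target:
        return min(alpha + step, target)
    return max(alpha - step, target)

alpha = 0.5                  # both controllers active at the start
pos_cmd = [1.0, 2.0, 3.0]    # command value from the first control unit (position control)
vs_cmd = [0.8, 2.2, 2.9]     # command value from the second control unit (visual servo)

while alpha < 1.0:           # e.g. after passing switching point 4 (step S1018)
    alpha = stepwise_alpha(alpha, 1.0)
    cmd = blend(alpha, pos_cmd, vs_cmd)

print(alpha)                 # 1.0
print(cmd == pos_cmd)        # True
```

When α reaches 1 the synthesized command equals the position-control command alone, which corresponds to the behavior of steps S1010 and S1020.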
Fourth Embodiment
The second and third embodiments of the present invention perform visual servoing using a hand-eye camera; however, the scope of application of the present invention is not limited to this.

The fourth embodiment of the present invention is a mode in which the present invention is applied to an assembly operation such as inserting an object into a hole. Hereinafter, the fourth embodiment of the present invention will be described. Parts identical to those of the second and third embodiments are denoted by the same reference numerals, and their description is omitted.
Fig. 29 is a system configuration diagram showing an example of the configuration of a robot system 3 according to one embodiment of the present invention. The robot system 3 of the present embodiment mainly includes a robot 10A, a control unit 20, a first imaging unit 30 and a second imaging unit 40.

The robot 10A is an arm-type robot having an arm 11A that includes a plurality of joints 12 and a plurality of links 13. A hand 14 (a so-called end effector) for gripping a workpiece W or a tool is provided at the tip of the arm 11A. The position of the endpoint of the arm 11A is the position of the hand 14. The end effector is not limited to the hand 14.
A force sensor 102 (not shown in Fig. 29; see Fig. 30) is provided on the wrist portion of the arm 11A. The force sensor 102 is a sensor that detects the force and moment received as the reaction force opposing the force output by the robot 10A. As the force sensor, for example, a six-axis force sensor capable of simultaneously detecting six components, namely the force components along three translational axes and the moment components about three rotational axes, can be used. The physical quantity used by the force sensor may be current, voltage, charge, inductance, strain, resistance, electromagnetic induction, magnetism, air pressure, light, or the like. The force sensor 102 detects the six components by converting the desired physical quantity into an electric signal. The force sensor 102 is not limited to six axes and may be, for example, a three-axis sensor.
Next, a functional configuration example of the robot system 3 will be described. Fig. 30 shows a functional block diagram of the robot system 3.

The robot 10A includes an operation control unit 101, which controls the arm 11A in accordance with sensor values such as the encoder values of the actuators, and the force sensor 102.

The control unit 20A mainly includes a position control unit 2000, a visual servo unit 210, an image processing unit 212, a drive control unit 220 and a force control unit 230.

The force control unit 230 performs force control in accordance with the sensor information (force information and moment information) from the force sensor 102.
In the present embodiment, impedance control is performed as the force control. Impedance control is a control technique for setting the mechanical impedance (inertia, damping coefficient, stiffness) generated when a force is applied externally to the hand tip (the hand 14) of the robot to values of position and force suitable for the intended task. Specifically, in a model in which a mass, a viscosity coefficient and an elastic element are connected to the end-effector portion of the robot, control is performed so that the end effector contacts the object with the mass, viscosity coefficient and elasticity coefficient set as targets.

The force control unit 230 determines the moving direction and moving amount of the endpoint by impedance control. Further, in accordance with the moving direction and moving amount of the endpoint, the force control unit 230 determines the target angle of each actuator provided in the joints 12. The force control unit 230 then generates a command value for moving the arm 11A by the target angles and outputs it to the drive control unit 220. Since the processing performed by the force control unit 230 is of a general nature, its detailed description is omitted.
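As a rough illustration of the impedance model mentioned above (a mass, a viscosity coefficient and an elastic element attached to the end effector), the following one-axis sketch integrates M*x'' + D*x' + K*x = F_ext with explicit Euler steps. The parameter values, the constant contact force and the integration scheme are illustrative assumptions; the embodiment only states that such a model is used.

```python
M, D, K = 1.0, 20.0, 100.0   # target mass, viscosity (damping), elasticity
dt = 0.001                   # control period [s]

x, v = 0.0, 0.0              # endpoint displacement and velocity along one axis
f_ext = 5.0                  # measured contact force from the force sensor

for _ in range(10000):       # simulate 10 s of sustained contact
    a = (f_ext - D * v - K * x) / M   # impedance law solved for acceleration
    v += a * dt
    x += v * dt

# At steady state the endpoint yields until the spring term balances the force:
print(round(x, 3))           # 0.05, i.e. f_ext / K
```

Choosing a small target stiffness K makes the endpoint compliant during insertion; a large K makes it track position stiffly, which is the trade-off impedance control exposes.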
The force control is not limited to impedance control; other control methods that can flexibly accommodate external forces, such as compliance control, may also be used. In order to perform force control, the force applied to the end effector such as the hand 14 must be detected, but the method of detecting that force is not limited to using a force sensor. For example, the external force acting on the end effector can also be estimated from the torque values of the axes of the arm 11A. Therefore, for performing force control it suffices that the arm 11A has a mechanism for directly or indirectly acquiring the force applied to the end effector.
Next, the characteristic processing of the robot system 3 of the present embodiment configured as described above will be described. Fig. 31 is a flowchart showing the flow of the control processing of the arm 11A. This processing is started, for example, when a control start instruction is input via a button or the like, not shown. In the present embodiment, an assembly operation in which a workpiece W is inserted into a hole H, as shown in Fig. 32, will be described as an example.

When a control start instruction is input via a button or the like, not shown, the first control unit 202 controls the arm 11A by position control and moves the endpoint (step S130). The processing of step S130 is the same as that of step S1000.

In the present embodiment, the component of the command value based on position control is denoted α, the component of the command value based on visual servoing is denoted β, and the component of the command value based on force control is denoted γ. The components α, β and γ are set so that their sum is 1. In step S130, α is 1, and β and γ are 0.
Next, the first control unit 202 judges, from the result of moving the endpoint by position control, whether the endpoint has passed switching point 1 (step S132). The processing of step S132 is the same as that of step S1002. Information representing the position of switching point 1 is included in the preset path-related information.

Fig. 32 is a diagram explaining the trajectory of the endpoint and the positions of the switching points. In the present embodiment, switching point 1 is a predetermined position set in the workspace.

When the endpoint has not passed switching point 1 (No in step S132), the first control unit 202 repeats the processing of step S130.
When the endpoint has passed switching point 1 (Yes in step S132), the drive control unit 220 switches the components α and β stepwise with the passage of time and, using the switched components α and β, synthesizes the command value output from the first control unit 202 with the command value output from the second control unit 213, and outputs the result to the robot 10A. The operation control unit 101 moves the arm 11A (that is, the endpoint) in accordance with the command value (step S134). That is, in step S134, the endpoint is moved by position control and visual servoing.

The processing of step S134 will now be described in detail. Before the processing of step S134 is performed, that is, in the processing of step S132, the drive control unit 220 synthesizes the command values with the component α of the command value from the position control unit 2000 set to 1, the component β of the command value from the visual servo unit 210 set to 0, and the component γ of the command value from the force control unit 230 set to 0.
After the processing of step S134 starts, once a certain period (for example, 10 msec) has elapsed, the drive control unit 220 switches the component α of the command value from the position control unit 2000 from 1 to 0.95, and switches the component β of the command value from the visual servo unit 210 to 0.05. The drive control unit 220 then synthesizes the command values with the component of the command value from the position control unit 2000 set to 0.95 and the component of the command value from the visual servo unit 210 set to 0.05, and outputs the result to the robot 10A.

Thereafter, when the certain period elapses again, the drive control unit 220 switches the component α of the command value from the position control unit 2000 from 0.95 to 0.9, and switches the component β of the command value from the visual servo unit 210 from 0.05 to 0.1.

In this way, the components α and β are switched stepwise as the certain period elapses, and the command value output from the first control unit 202 and the command value output from the second control unit 213 are synthesized using the switched components. The drive control unit 220 repeats the above switching of the components until the component α becomes 0.05 and the component β becomes 0.95. As a result, the endpoint is moved by position control and visual servoing. In step S134, since force control is not used, the component γ remains 0.
The final ratio α:β of the components is not limited to 0.05:0.95. The components α and β can take various values whose sum is 1. However, in this operation, since the position of the hole H is not necessarily constant, it is preferable to make the component β of visual servoing larger than the component α of position control.

The method of gradually switching the component α is also not limited to the above. For example, as shown in Figs. 26A and 26B, the component α may be changed in accordance with the distance to the position of the object or the distance from the object. Further, as shown by lines C and D in Fig. 26, the component α may be changed continuously.
Next, the second control unit 213 judges, from the result of moving the endpoint by position control and visual servoing, whether the endpoint has passed switching point 2 (step S136).

Switching point 2 is determined by a relative position from the hole H. For example, switching point 2 is a position at a distance L (for example, 10 cm) from the center of the opening of the hole H. The set of positions at the distance L from the center of the opening of the hole H can be regarded as a hemisphere in x, y, z space. Fig. 32 illustrates the position at the distance L in the z direction from the center of the opening of the hole H.

The image processing unit 212 extracts, from the current image, an image containing the tip of the workpiece W and the hole H, and outputs it to the second control unit 213. In addition, the image processing unit 212 calculates the relation between distances in the image and distances in real space from the camera parameters (focal length, etc.) of the first imaging unit 30 or the second imaging unit 40, and outputs it to the second control unit 213. The second control unit 213 judges whether the endpoint has passed switching point 2 from the difference between the tip position of the workpiece W and the center position of the hole H in the extracted image.
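The judgment of switching point 2 described above can be sketched as follows, assuming a simple pinhole-camera relation between pixel distances and real-space distances (real distance = pixel distance * depth / focal length in pixels). The focal length, depth and pixel coordinates are illustrative assumptions; the embodiment only states that the image processing unit 212 supplies such a relation from the camera parameters.

```python
import math

def pixel_to_meters(pixels, focal_length_px, depth_m):
    """Pinhole model: real-space size = pixel size * depth / focal length."""
    return pixels * depth_m / focal_length_px

def passed_switching_point_2(tip_px, hole_px, focal_length_px, depth_m, L=0.10):
    """True when workpiece tip is within L meters of the hole-opening center."""
    dist_px = math.hypot(tip_px[0] - hole_px[0], tip_px[1] - hole_px[1])
    return pixel_to_meters(dist_px, focal_length_px, depth_m) <= L

# Tip 300 px from the hole center, camera at 0.5 m depth, f = 2000 px:
# 300 * 0.5 / 2000 = 0.075 m, inside L = 0.10 m.
print(passed_switching_point_2((800, 400), (500, 400), 2000.0, 0.5))   # True
print(passed_switching_point_2((1300, 400), (500, 400), 2000.0, 0.5))  # False
```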
When the endpoint has not passed switching point 2 (No in step S136), the first control unit 202, the second control unit 213 and the drive control unit 220 repeat the processing of step S134.

When the endpoint has passed switching point 2 (Yes in step S136), the drive control unit 220 synthesizes the command value output from the first control unit 202 with the command value output from the force control unit 230, and outputs the result to the robot 10A. The operation control unit 101 moves the arm 11A (that is, the endpoint) in accordance with the command value (step S138).
The processing of step S138 will now be described in detail. Before the processing of step S138 is performed, that is, in the processing of step S134, the drive control unit 220 synthesizes the command values with the component α of the command value from the position control unit 2000 set to 0.05 and the component β of the command value from the visual servo unit 210 set to 0.95.

After the processing of step S138 starts, the drive control unit 220 switches the component α of the command value from the position control unit 2000 from 0.05 to 0.5, and switches the component γ of the command value from the force control unit 230 from 0 to 0.5. As a result, the drive control unit 220 synthesizes the command values with the component α of the command value from the position control unit 2000 set to 0.5, the component β of the command value from the visual servo unit 210 set to 0, and the component γ of the command value from the force control unit 230 set to 0.5, and outputs the result to the robot 10A. In step S138, since visual servoing is not used, the component β is kept at 0. The components α and γ may also be switched stepwise.
Next, the force control unit 230 judges, from the result of moving the endpoint by position control and force control, whether the endpoint has reached the target point (step S140). Whether the target point has been reached can be judged from the output of the force sensor 102.

When the endpoint has not reached the target point (No in step S140), the position control unit 2000, the force control unit 230 and the drive control unit 220 repeat the processing of step S138.

When the endpoint has reached the target point (Yes in step S140), the drive control unit 220 ends the processing.
According to the present embodiment, the high speed of position control can be maintained while differing target positions are accommodated. Moreover, even when visual servoing cannot be used, for example when the target position cannot be confirmed, the high speed of position control can be maintained and the operation can be carried out safely.

In the present embodiment, switching point 1 is preset at an arbitrary position in the workspace and switching point 2 is set at a position at a predetermined distance from the hole H; however, the positions of switching points 1 and 2 are not limited to this. The positions of switching points 1 and 2 may instead be set using the time elapsed from a predetermined position. Specifically, for example, the position of switching point 2 can be set at the point reached 30 seconds after passing switching point 1. The positions of switching points 1 and 2 may also be set using the distance from a predetermined position. Specifically, for example, the position of switching point 1 can be set at a position at a distance X from the start point. Further, the positions of switching points 1 and 2 may be set in accordance with an external signal input (for example, an input signal from the input unit 25).
Fifth Embodiment
The fourth embodiment of the present invention performs an assembly operation such as inserting an object into a hole by position control and force control; however, the scope of application of the present invention is not limited to this.

The fifth embodiment of the present invention is a mode in which the present invention is applied to an assembly operation such as inserting an object into a hole by position control, visual servoing and force control. Hereinafter, the fifth embodiment of the present invention will be described. Since the configuration of the robot system 4 of the fifth embodiment is the same as that of the robot system 3, its description is omitted. In the processing performed by the robot system 4, parts identical to those of the second, third and fourth embodiments are denoted by the same reference numerals, and their detailed description is omitted.
The characteristic processing of the robot system 4 of the present embodiment will be described. Fig. 33 is a flowchart showing the flow of the control processing of the arm 11A of the robot system 4. This processing is started, for example, when a control start instruction is input via a button or the like, not shown. In the present embodiment, an assembly operation in which a workpiece W is inserted into a hole H formed in a moving stage, as shown in Fig. 34, will be described as an example.

When a control start instruction is input via a button or the like, not shown, the first control unit 202 controls the arm 11A by position control and moves the endpoint (step S130).

Next, the first control unit 202 judges, from the result of moving the endpoint by position control, whether the endpoint has passed switching point 1 (step S132).

When the endpoint has not passed switching point 1 (No in step S132), the first control unit 202 repeats the processing of step S130.
When the endpoint has passed switching point 1 (Yes in step S132), the drive control unit 220 switches the components α and β stepwise with the passage of time and, using the switched components α and β, synthesizes the command value output from the first control unit 202 with the command value output from the second control unit 213, and outputs the result to the robot 10A. The operation control unit 101 moves the arm 11A (that is, the endpoint) in accordance with the command value (step S134).

Next, the second control unit 213 judges, from the result of moving the endpoint by position control and visual servoing, whether the endpoint has passed switching point 2 (step S136).

When the endpoint has not passed switching point 2 (No in step S136), the first control unit 202, the second control unit 213 and the drive control unit 220 repeat the processing of step S134.
When the endpoint has passed switching point 2 (Yes in step S136), the drive control unit 220 synthesizes the command value output from the first control unit 202, the command value output from the second control unit 213 and the command value output from the force control unit 230, and outputs the result to the robot 10A. The operation control unit 101 moves the arm 11A (that is, the endpoint) in accordance with the command value (step S139).

The processing of step S139 will now be described in detail. Before the processing of step S139 is performed, that is, in the processing of step S134, the drive control unit 220 synthesizes the command values with the component α of the command value from the position control unit 2000 set to 0.05 and the component β of the command value from the visual servo unit 210 set to 0.95.

After the processing of step S139 starts, the drive control unit 220 switches the component α of the command value from the position control unit 2000 from 0.05 to 0.34, switches the component β of the command value from the visual servo unit 210 from 0.95 to 0.33, and switches the component γ of the command value from the force control unit 230 from 0 to 0.33. As a result, the drive control unit 220 synthesizes the command values with the component α of the command value from the position control unit 2000 set to 0.34, the component β of the command value from the visual servo unit 210 set to 0.33 and the component γ of the command value from the force control unit 230 set to 0.33, and outputs the result to the robot 10A.

The ratio α:β:γ of the components α, β and γ is not limited to 0.34:0.33:0.33. The components α, β and γ can be set, in accordance with the operation, to various values whose sum is 1. The components α, β and γ may also be switched gradually.
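The three-way synthesis of step S139 can be sketched as a weighted sum of the three command values with components that sum to 1. The 0.34 : 0.33 : 0.33 weights follow the example above; the command vectors themselves are stand-ins for illustration.

```python
def synthesize(weights, commands):
    """Weighted sum of per-controller command vectors (weights must sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "components must sum to 1"
    n = len(commands[0])
    return [sum(w * cmd[i] for w, cmd in zip(weights, commands)) for i in range(n)]

pos_cmd = [1.0, 0.0]    # from the first control unit (position control)
vs_cmd = [0.0, 1.0]     # from the second control unit (visual servo)
force_cmd = [0.5, 0.5]  # from the force control unit

out = synthesize([0.34, 0.33, 0.33], [pos_cmd, vs_cmd, force_cmd])
print(round(out[0], 3), round(out[1], 3))  # 0.505 0.495
```

Setting any component to 0 recovers the two-way cases of the earlier steps (for example, γ = 0 gives the position-plus-visual-servo blend of step S134).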
Next, the force control unit 230 judges, from the result of moving the endpoint by position control, visual servoing and force control, whether the endpoint has reached the target point (step S140).

When the endpoint has not reached the target point (No in step S140), the position control unit 2000, the visual servo unit 210, the force control unit 230 and the drive control unit 220 repeat the processing of step S139.

When the endpoint has reached the target point (Yes in step S140), the drive control unit 220 ends the processing.
According to the present embodiment, the high speed of position control can be maintained, and the endpoint can be moved to differing target positions. In particular, even when the target position is moving and even when the target position cannot be confirmed, since control is performed by combining position control, visual servoing and force control, the high speed of position control can be maintained and the operation can be carried out safely.

In the present embodiment, the arm is controlled by performing position control, visual servoing and force control simultaneously (parallel control), whereas in the fourth embodiment the arm is controlled by performing position control and force control simultaneously (parallel control). The drive control unit 220 can choose whether to perform position control, visual servoing and force control simultaneously, or position control and force control simultaneously, in accordance with predetermined conditions such as whether the workpiece W, the hole H and the like can be visually confirmed and whether they are moving, or in accordance with preset conditions stored in the memory 22 or the like.
In the above embodiments, the case of using a one-armed robot has been described; however, the present invention can also be applied to the case of using a two-armed robot. In the above embodiments, the case where the endpoint is provided at the tip of the arm of the robot has been described; however, the location of the endpoint is not limited to the arm. For example, a manipulator composed of a plurality of joints and links and moved by moving the joints may be provided on the robot, and the tip of the manipulator may be used as the endpoint.

In the above embodiments, two imaging units, the first imaging unit 30 and the second imaging unit 40, are provided; however, there may be a single imaging unit.
The present invention has been described above by way of embodiments, but the technical scope of the present invention is not limited to the scope described in the above embodiments. It is obvious to those skilled in the art that various changes or improvements can be applied to the above embodiments. It is also apparent from the description of the claims that modes to which such changes or improvements are applied can be included in the technical scope of the present invention. In particular, the present invention can provide a robot system in which the robot, the control unit and the imaging unit are provided separately, can provide a robot that includes the control unit, and can also provide a robot control device composed of only the control unit, or of the control unit and the imaging unit. The present invention can further provide a program for controlling the robot and the like, and a storage medium storing the program.
Sixth Embodiment

1. Method of the Present Embodiment
Robot control using image information is well known. For example, visual servo control is known in which image information is continuously acquired and the result of comparison processing between information acquired from the image information and target information is fed back. In visual servoing, the robot is controlled in the direction that reduces the difference between the information acquired from the latest image information and the target information. Specifically, control is performed as follows: a variation amount, such as a joint angle variation, that brings the robot closer to the target is obtained, and the joints are driven in accordance with that variation amount.
In a technique in which a target position and orientation of the hand tip of the robot or the like is given and the robot is controlled so as to take that target position and orientation, it is not easy to improve positioning accuracy, that is, to move the hand tip (hand) or the like accurately to the target position and orientation. Ideally, once the model of the robot is determined, the position and orientation of the hand tip can be obtained uniquely from that model. The model here refers to information such as the length of the frame (link) provided between two joints and the structure of the joints (rotation direction, presence or absence of offset, and so on).
However, a robot contains various errors, for example deviations in link length and flexure caused by gravity. Because of these error factors, when control is performed to make the robot take a given position and orientation (for example, control that determines the angle of each joint), the ideal position and orientation can differ from the actual position and orientation.

In this respect, in visual servo control, the image processing result for the captured image is fed back. Therefore, just as a person can finely adjust the moving direction of the arm or hand while observing the work state with the eyes, even if the current position and orientation deviates from the target, the deviation can be recognized and corrected.
As the above-mentioned "information acquired from the image" and "target information" in visual servo control, three-dimensional position and orientation information of the hand tip of the robot or the like may be used, or an image feature amount acquired from the image may be used without converting it into position and orientation information. Visual servoing that uses position and orientation information is called position-based visual servoing, and visual servoing that uses image feature amounts is called feature-based visual servoing.
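A minimal sketch of one feature-based visual-servo step, in the spirit of the description above: the current image features are driven toward the target features by a proportional law on the feature error, so the control never converts the features into a position and orientation. The feature values and the gain are illustrative assumptions and not taken from this document.

```python
def visual_servo_step(features, target_features, gain=0.5):
    """Return updated features after one proportional step against the error."""
    return [f + gain * (t - f) for f, t in zip(features, target_features)]

current = [120.0, 80.0]   # e.g. pixel coordinates of a tracked marker
target = [100.0, 100.0]   # where the marker should appear at the goal pose

for _ in range(20):       # each loop: capture image, measure features, move
    current = visual_servo_step(current, target)

# The feature error shrinks geometrically (factor 1 - gain per step):
print([round(c, 3) for c in current])  # [100.0, 100.0]
```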
In order to perform visual servoing appropriately, the position and orientation information or the image feature amount must be detected from the image information with high accuracy. If the accuracy of this detection processing is low, the current state may be misidentified. In that case, the information fed back in the control loop does not bring the state of the robot appropriately closer to the target state, and high-accuracy robot control cannot be achieved.
The position and orientation information and the image feature amount are assumed to be obtained by some detection processing (for example, matching processing), but the accuracy of that detection processing is not necessarily sufficient. This is because, in the environment where the robot actually operates, the captured image contains not only the object to be recognized (for example, the hand of the robot) but also workpieces, jigs, objects placed in the work environment, and so on. Since various objects appear in the background of the image, the recognition accuracy (detection accuracy) of the desired object decreases, and the accuracy of the obtained position and orientation information or image feature amount also decreases.
Patent document 1 discloses a technique in which, in position-based visual servoing, an abnormality is detected by comparing the spatial position or moving speed calculated from the image with the spatial position or moving speed calculated from the encoders. Since the spatial position is information included in the position and orientation information, and the moving speed is information obtained from the variation of the position and orientation information, the spatial position and the moving speed are described below as position and orientation information.
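The position-based comparison attributed to patent document 1 can be sketched as follows: the spatial position estimated from the image is compared with the position computed from the encoders, and an abnormality is flagged when the two disagree by more than a threshold. All positions and the threshold here are illustrative assumptions.

```python
def is_abnormal(image_pos, encoder_pos, threshold=0.02):
    """True when image-based and encoder-based positions disagree too much."""
    err = sum((a - b) ** 2 for a, b in zip(image_pos, encoder_pos)) ** 0.5
    return err > threshold

print(is_abnormal([0.40, 0.10, 0.25], [0.41, 0.10, 0.25]))  # False (1 cm apart)
print(is_abnormal([0.40, 0.10, 0.25], [0.48, 0.10, 0.25]))  # True (8 cm apart)
```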
The means by using patent document 1 are considered, so as to be generated in the posture information being obtained according to image information
In the case that large error etc., visual servo generate some exceptions, the exception can be detected.If abnormal detection can be realized,
The control of robot then can be stopped or re-start the detection of posture information, so as at least inhibit to protect in control
It holds as former state using the situation of abnormal information.
However, the technique of Patent Document 1 presupposes position-based visual servoing. In the position-based case, as described above, the comparison processing between the position/posture information easily obtained from the encoder information and the position/posture information obtained from the image information can be performed, and is therefore easy to realize. In feature-based visual servoing, on the other hand, image feature amounts are used in the control of the robot. Moreover, even though the spatial position of the hand tip of the robot or the like is easily obtained from the encoder information, its relation to the image feature amount cannot be obtained directly. That is, when feature-based visual servoing is assumed, it is difficult to apply the technique of Patent Document 1.
Therefore, the present applicant proposes the following technique: in control using image feature amounts, an abnormality is detected by using an image feature amount variation actually obtained from image information and an inferred image feature amount variation inferred from information obtained as a result of controlling the robot. Specifically, as shown in Fig. 35, the robot controller 1000 of the present embodiment includes: a robot control unit 1110 that controls the robot 20000 based on image information; a variation operation unit 1120 that obtains an image feature amount variation based on the image information; a variation inference unit 1130 that computes an inferred image feature amount variation, which is an inferred amount of the image feature amount variation, based on variation-inference information that is information on the robot 20000 or an object and is information other than the image information; and an abnormality determination unit 1140 that performs abnormality determination through comparison processing between the image feature amount variation and the inferred image feature amount variation.
Here, as described above, the image feature amount is an amount representing a feature in an image, such as a region, an area, the length of a line segment, or the position of a feature point, and the image feature amount variation is information representing the change between multiple image feature amounts obtained from multiple (in a narrow sense, two) pieces of image information. For example, if the two-dimensional positions of three feature points on the image are used as the image feature amount, the image feature amount is a six-dimensional vector, and the image feature amount variation is the difference between two six-dimensional vectors, that is, a six-dimensional vector whose elements are the differences of the corresponding vector elements.
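As a minimal sketch of this definition (the coordinate values below are illustrative, not taken from the patent), the six-dimensional feature vector and its variation can be computed as:

```python
def feature_delta(f_old, f_new):
    """Image feature amount variation: element-wise difference of two
    feature vectors obtained from two pieces of image information."""
    assert len(f_old) == len(f_new)
    return [b - a for a, b in zip(f_old, f_new)]

# 2D image coordinates (u, v) of 3 feature points, flattened to 6-dim vectors
f_old = [10.0, 20.0, 30.0, 40.0, 50.0, 60.0]
f_new = [12.0, 21.0, 29.0, 43.0, 50.0, 58.0]
delta_f = feature_delta(f_old, f_new)   # [2.0, 1.0, -1.0, 3.0, 0.0, -2.0]
```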
In addition, the variation-inference information is information used for inferring the image feature amount variation, and is information other than the image information. The variation-inference information may be, for example, information obtained (actually measured) as a result of controlling the robot; specifically, it may be the joint angle information of the robot 20000. The joint angle information can be obtained from an encoder that measures and controls the operation of the motors (in a broad sense, actuators) driving the joints of the robot. Alternatively, the variation-inference information may be the position/posture information of the end effector 2220 of the robot 20000 or of the object handled by the robot 20000. The position/posture information is, for example, a six-dimensional vector comprising the three-dimensional position (x, y, z) of a reference point of the object and the rotations (R1, R2, R3) about each axis relative to a reference posture. Various techniques for obtaining the position/posture information of the object are conceivable; for example, the following can be used: detection by ultrasonic distance measurement, use of a measuring instrument, attaching an LED to the hand tip and measuring that LED, use of a mechanical three-dimensional digitizer, and the like.
In this way, an abnormality can be detected in robot control using image feature amounts (in a narrow sense, feature-based visual servoing). At this point, comparison processing is performed between the image feature amount variation obtained from the actually acquired image information and the inferred image feature amount variation obtained from the variation-inference information acquired from a viewpoint different from the image information.
Note that the control of the robot based on image information is not limited to visual servoing. For example, in visual servoing, feedback based on the image information is performed continuously in the control loop, but a visual method in which image information is acquired once, a movement amount toward the target position/posture is obtained from that image information, and position control is performed according to that movement amount may also be used as control of the robot based on image information. Furthermore, besides visual servoing and the visual method, the technique of the present embodiment can also be applied as a technique for detecting abnormalities in the detection of information from image information in robot control that uses image information.
However, as described later, the technique of the present embodiment is assumed to use a Jacobian matrix in the computation of the inferred image feature amount variation. A Jacobian matrix is information representing the relation between the variation of a given value and the variation of another value. For example, even if first information x and second information y are in a nonlinear relation (g in y = g(x) is a nonlinear function), in the vicinity of a given value the variation Δx of the first information and the variation Δy of the second information can be regarded as being in a linear relation (h in Δy = h(Δx) is a linear function), and the Jacobian matrix represents that linear relation. That is, in the present embodiment, it is assumed that the image feature amount itself is not used in the processing; the image feature amount variation is used. Accordingly, when the technique of the present embodiment is applied to control other than visual servoing, such as the visual method, it should be noted that a technique that acquires only one piece of image information cannot be used; at least two or more pieces of image information must be acquired so that the image feature amount variation can be obtained. If the technique of the present embodiment is applied to the visual method, for example, it is necessary to acquire multiple pieces of image information and compute the movement amount toward the target.
Hereinafter, after describing a system configuration example of the robot controller 1000 and the robot of the present embodiment, an outline of visual servoing is described. On that basis, the abnormality detection technique of the present embodiment is described, and finally modifications are described. In the following, visual servoing is taken as an example of robot control using image information, but the following description can be extended to robot control using other image information.
2. System configuration example
A detailed system configuration example of the robot controller 1000 of the present embodiment is shown in Fig. 36. However, the robot controller 1000 is not limited to the configuration of Fig. 36, and various modifications such as omitting part of the above constituent elements or adding other constituent elements are possible.
As shown in Fig. 36, the robot controller 1000 includes a target feature amount input unit 111, a target trajectory generation unit 112, a joint angle control unit 113, a drive unit 114, a joint angle detection unit 115, an image information acquisition unit 116, an image feature amount operation unit 117, the variation operation unit 1120, the variation inference unit 1130, and the abnormality determination unit 1140.
The target feature amount input unit 111 inputs the image feature amount fg serving as the target to the target trajectory generation unit 112. The target feature amount input unit 111 may be realized, for example, as an interface that receives the input of the target image feature amount fg performed by the user. In the robot control, control is performed to bring the image feature amount f obtained from the image information close to (in a narrow sense, into agreement with) the target image feature amount fg input here. Alternatively, image information corresponding to the target state (a reference image, a target image) may be acquired, and the target image feature amount fg may be obtained from that image information. Or, the input of the target image feature amount fg may be received directly, without keeping a reference image.
The target trajectory generation unit 112 generates a target trajectory for operating the robot 20000 based on the target image feature amount fg and the image feature amount f obtained from the image information. Specifically, it performs processing to obtain the joint angle variation Δθg that brings the robot 20000 close to the target state (the state corresponding to fg). Δθg becomes a provisional target value of the joint angles. In the target trajectory generation unit 112, the drive amount of the joint angles per unit time (θg with a dot in Fig. 36) may also be obtained from Δθg.
The joint angle control unit 113 controls the joint angles based on the target value Δθg of the joint angles and the current joint angle value θ. For example, since Δθg is a variation of the joint angles, θ and Δθg are used to perform processing that determines what value each joint angle should take. The drive unit 114 drives and controls the joints of the robot 20000 in accordance with the control by the joint angle control unit 113.
The joint angle detection unit 115 performs processing to detect what values the joint angles of the robot 20000 take. Specifically, after the joint angles have been changed by the drive control performed by the drive unit 114, it detects the values of the joint angles after the change and outputs the current joint angle values to the joint angle control unit 113 as θ. Concretely, the joint angle detection unit 115 may be realized as an interface or the like that acquires encoder information.
The image information acquisition unit 116 acquires image information from an imaging unit or the like. The imaging unit here may be an imaging unit arranged in the environment as shown in Fig. 37, or an imaging unit attached to the arm 2210 or the like of the robot 20000 (for example, a hand-eye camera). The image feature amount operation unit 117 performs computation of the image feature amount based on the image information acquired by the image information acquisition unit 116. Various techniques for computing the image feature amount from image information are known, such as edge detection processing and matching processing, and they can be widely applied in the present embodiment, so a detailed description is omitted. The image feature amount obtained by the image feature amount operation unit 117 is output to the target trajectory generation unit 112 as the latest image feature amount f.
The variation operation unit 1120 keeps the image feature amounts calculated by the image feature amount operation unit 117, and computes the image feature amount variation Δf based on the difference between the image feature amount f_old obtained in the past and the image feature amount f to be processed (in a narrow sense, the latest image feature amount).
The variation inference unit 1130 keeps the joint angle information detected by the joint angle detection unit 115, and computes the variation Δθ of the joint angle information based on the difference between the joint angle information θ_old acquired in the past and the joint angle information θ to be processed (in a narrow sense, the latest joint angle information). Then, based on Δθ, the inferred image feature amount variation Δfe is obtained. In Fig. 36, an example in which the variation-inference information is the joint angle information is illustrated, but as described above, the position/posture information of the end effector 2220 of the robot 20000 or of the object may also be used as the variation-inference information.
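As a hedged sketch of what the variation inference unit 1130 computes, a toy 2x2 image Jacobian is used below in place of the real n x 6 matrix; all numeric values are illustrative, not from the patent:

```python
def infer_feature_delta(Jv, theta_p, theta_q):
    """Inferred image feature amount variation: delta_fe = Jv * (theta_q - theta_p),
    computed from encoder-measured joint angles alone (no image needed)."""
    d_theta = [q - p for p, q in zip(theta_p, theta_q)]
    return [sum(row[k] * d_theta[k] for k in range(len(d_theta)))
            for row in Jv]

Jv = [[1.0, 0.0],
      [0.0, 2.0]]              # toy 2x2 image Jacobian
theta_p = [0.0, 0.0]           # joint angles at the earlier image
theta_q = [0.5, 0.0]           # joint angles at the later image
delta_fe = infer_feature_delta(Jv, theta_p, theta_q)   # [0.5, 0.0]
```

Comparing this delta_fe against the Δf measured from the two images is exactly the comparison processing that the abnormality determination unit 1140 performs.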
The robot control unit 1110 of Fig. 35 may correspond to the target feature amount input unit 111, the target trajectory generation unit 112, the joint angle control unit 113, the drive unit 114, the joint angle detection unit 115, the image information acquisition unit 116, and the image feature amount operation unit 117 of Fig. 36.
In addition, as shown in Fig. 38, the technique of the present embodiment can be applied to a robot configured to include: the robot control unit 1110 that controls the robot (specifically, the robot body 3000 including the arm 2210 and the end effector 2220) based on image information; the variation operation unit 1120 that obtains the image feature amount variation based on the image information; the variation inference unit 1130 that computes the inferred image feature amount variation, which is the inferred amount of the image feature amount variation, based on the variation-inference information that is information on the robot 20000 or the object and is information other than the image information; and the abnormality determination unit 1140 that performs abnormality determination through comparison processing between the image feature amount variation and the inferred image feature amount variation.
As shown in Fig. 19A and Fig. 19B, the robot here may be a robot including a control device 600 and a robot body 300. In the configuration of Fig. 19A and Fig. 19B, the control device 600 includes the robot control unit 1110 and the like of Fig. 38. In this way, a robot that operates based on control using image information and automatically detects abnormalities in that control can be realized.
The configuration example of the robot of the present embodiment is not limited to Fig. 19A and Fig. 19B. For example, as shown in Fig. 39, the robot may include the robot body 3000 and a base unit 350. The robot of the present embodiment may be a dual-arm robot as shown in Fig. 39, which includes, in addition to the parts corresponding to the head and the torso, a first arm 2210-1 and a second arm 2210-2. In Fig. 39, the first arm 2210-1 is composed of joints 2211 and 2213 and frames 2215 and 2217 arranged between the joints, and the second arm 2210-2 is configured likewise, but the configuration is not limited to this. Although Fig. 39 shows an example of a dual-arm robot with two arms, the robot of the present embodiment may have three or more arms.
The base unit 350 is provided at the lower part of the robot body 3000 and supports the robot body 3000. In the example of Fig. 39, the base unit 350 is provided with wheels or the like, so that the robot as a whole can move. However, the base unit 350 may have no wheels or the like and instead be fixed to the floor or the like. In Fig. 39, a device corresponding to the control device 600 of Fig. 19A and Fig. 19B is not shown, but in the robot system of Fig. 39, the control device 600 is stored in the base unit 350, so that the robot body 3000 and the control device 600 are formed as one unit.
Alternatively, a dedicated control device such as the control device 600 may not be provided, and the robot control unit 1110 and the like described above may be realized by a substrate built into the robot (more specifically, an IC or the like arranged on the substrate).
As shown in Fig. 20, the functions of the robot controller 1000 may also be realized by a server 500 connected to the robot by communication via a network 400 including at least one of wired and wireless connections.
Alternatively, in the present embodiment, the server 500 as the robot controller may be configured to perform part of the processing of the robot controller of the present invention. In that case, the processing is realized by distributed processing with a robot controller provided on the robot side. Specifically, the server 500 as the robot controller performs, among the respective processes of the robot controller of the present invention, the processing allocated to the server 500. Meanwhile, the robot controller provided in the robot performs, among the respective processes of the robot controller of the present invention, the processing allocated to the robot.
For example, suppose the robot controller of the present invention performs first to M-th processes (M is an integer), and each of the first to M-th processes is divided into multiple sub-processes, such that the first process is realized by sub-process 1a and sub-process 1b, and the second process is realized by sub-process 2a and sub-process 2b. In this case, distributed processing is conceivable in which the server 500 as the robot controller performs sub-process 1a, sub-process 2a, ..., sub-process Ma, and the robot controller provided on the robot side performs sub-process 1b, sub-process 2b, ..., sub-process Mb. In this case, the robot controller of the present embodiment, that is, the robot controller that executes the first to M-th processes, may be the robot controller that executes sub-processes 1a to Ma, the robot controller that executes sub-processes 1b to Mb, or the robot controller that executes all of sub-processes 1a to Ma and sub-processes 1b to Mb. Furthermore, the robot controller of the present embodiment is a robot controller that executes at least one sub-process of each of the first to M-th processes.
In this way, for example, the server 500, whose processing capacity is higher than that of the terminal device on the robot side (for example, the control device 600 of Fig. 19A and Fig. 19B), can perform processing with a high processing load. Furthermore, since the server 500 can collectively control the operation of each robot, it becomes easy, for example, to make multiple robots operate in coordination.
In recent years, there has been a growing tendency toward manufacturing a large variety of components in small quantities. When the type of component to be manufactured is changed, the operation performed by the robot must be changed. With a configuration such as that of Fig. 20, even without redoing the teaching work for each of the multiple robots, the server 500 can collectively change the operations performed by the robots. Furthermore, compared with the case where one robot controller 1000 is provided for each robot, the trouble involved in software updates of the robot controller 1000 and the like can be greatly reduced.
3. Visual servoing control
Before describing the abnormality detection technique of the present embodiment, general visual servo control is described. A configuration example of a general visual servo control system is shown in Fig. 40. As shown in Fig. 40, compared with the robot controller 1000 of the present embodiment shown in Fig. 36, the configuration is one from which the variation operation unit 1120, the variation inference unit 1130, and the abnormality determination unit 1140 are removed.
When the dimension of the image feature amount used for visual servoing is n (n is an integer), the image feature amount f is expressed as the image feature amount vector f = [f1, f2, ..., fn]^T. Each element of f is, for example, an image coordinate value of a feature point (control point) or the like. In this case, the target image feature amount fg input from the target feature amount input unit 111 is likewise expressed as fg = [fg1, fg2, ..., fgn]^T.
The joint angles are also expressed as a joint angle vector whose dimension corresponds to the number of joints that the robot 20000 (in a narrow sense, the arm 2210) includes. For example, if the arm 2210 is a six-degree-of-freedom arm with six joints, the joint angle vector θ is expressed as θ = [θ1, θ2, ..., θ6]^T.
In visual servoing, when the current image feature amount f is obtained, the difference between the image feature amount f and the target image feature amount fg is fed back to the operation of the robot. Specifically, the robot is operated in a direction that reduces the difference between the image feature amount f and the target image feature amount fg. To this end, it is necessary to know how the image feature amount f changes when the joint angles θ are moved. Usually this relation is nonlinear; for example, in the case of f1 = g(θ1, θ2, θ3, θ4, θ5, θ6), the function g is a nonlinear function.
Therefore, in visual servoing, a well-known technique using a Jacobian matrix J is employed. Even if two spaces are in a nonlinear relation, the relation between minute variations in each space can be expressed as a linear relation. The Jacobian matrix J is the matrix that relates these minute variations to each other.
Specifically, when the position/posture X of the hand tip of the robot 20000 is X = [x, y, z, R1, R2, R3]^T, the Jacobian matrix Ja between the variation of the joint angles and the variation of the position/posture is expressed by the following expression (1), and the Jacobian matrix Ji between the variation of the position/posture and the image feature amount variation is expressed by the following expression (2).

Ja = ∂X/∂θ (1)

Ji = ∂f/∂X (2)
By using Ja and Ji, the relations among Δθ, ΔX, and Δf can be stated as shown in the following expressions (3) and (4). Ja is generally called the robot Jacobian matrix, and if mechanism information such as the link lengths and rotation axes of the robot 20000 is available, Ja can be calculated analytically. On the other hand, Ji can be estimated in advance from the change of the image feature amount when the position/posture of the hand tip of the robot 20000 is changed minutely, and techniques for estimating Ji on the fly during operation have also been proposed.

ΔX = Ja·Δθ (3)

Δf = Ji·ΔX (4)
By using the above expressions (3) and (4), the relation between the image feature amount variation Δf and the joint angle variation Δθ can be expressed as shown in the following expression (5).

Δf = Jv·Δθ (5)

Here, Jv = Ji·Ja, and it represents the Jacobian matrix between the variation of the joint angles and the image feature amount variation. Jv is also called the image Jacobian matrix. The relations of expressions (3) to (5) are shown in Fig. 41.
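The composition Jv = Ji·Ja is plain matrix multiplication; the dimensions below are toy values (2 features, 3 pose components, 2 joints) rather than the 6/6/6 case described here, and the matrix entries are illustrative:

```python
def matmul(A, B):
    """Matrix product: (n x m) @ (m x p) -> (n x p)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

Ji = [[1.0, 0.0, 2.0],
      [0.0, 1.0, 0.0]]     # df/dX: feature change per pose change
Ja = [[1.0, 0.0],
      [0.0, 1.0],
      [0.5, 0.5]]          # dX/dtheta: pose change per joint change
Jv = matmul(Ji, Ja)        # image Jacobian df/dtheta, eq. Jv = Ji * Ja
```

In the real system, Ja would come analytically from the mechanism information and Ji from minute test motions of the hand tip, as the text describes.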
From the above, the target trajectory generation unit 112 obtains the joint angle drive amount (joint angle variation) Δθ using the difference between f and fg as Δf. In this way, the joint angle variation that brings the image feature amount f close to fg can be obtained. Specifically, to obtain Δθ from Δf, both sides of the above expression (5) are multiplied from the left by the inverse matrix Jv^(-1) of Jv; further considering a control gain λ, the joint angle variation Δθg serving as the target is obtained by the following expression (6).

Δθg = -λ·Jv^(-1)·(f - fg) (6)

In the above expression (6), the inverse matrix Jv^(-1) of Jv is used, but when Jv^(-1) cannot be obtained, the generalized inverse matrix (pseudo-inverse matrix) Jv# of Jv may be used instead.
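A minimal sketch of expression (6) for a 2x2 case follows; a full implementation would use a 6x6 inverse or the pseudo-inverse noted above, and the gain and matrices below are illustrative assumptions:

```python
def inv2(J):
    """Inverse of a 2x2 matrix (the real case would fall back to a
    generalized/pseudo-inverse when Jv is singular)."""
    a, b = J[0]
    c, d = J[1]
    det = a * d - b * c
    return [[d / det, -b / det],
            [-c / det, a / det]]

def joint_target_delta(Jv, f, fg, lam=0.5):
    """Eq. (6): delta_theta_g = -lambda * Jv^-1 * (f - fg)."""
    Jinv = inv2(Jv)
    err = [fi - gi for fi, gi in zip(f, fg)]
    return [-lam * sum(Jinv[i][k] * err[k] for k in range(2))
            for i in range(2)]

Jv = [[2.0, 0.0], [0.0, 1.0]]   # toy image Jacobian
f, fg = [4.0, 2.0], [2.0, 2.0]
dtg = joint_target_delta(Jv, f, fg)   # equal to [-0.5, 0.0]
```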
By using the above expression (6), a new Δθg is obtained every time a new image is obtained. As a result, control that approaches the target state (the state in which the image feature amount becomes fg) can be performed while updating the target joint angles using the obtained images. This flow is shown in Fig. 42. If the image feature amount f_(m-1) is obtained from the (m-1)-th image (m is an integer), Δθg_(m-1) can be obtained by setting f = f_(m-1) in the above expression (6). Then, between the (m-1)-th image and the next image, that is, the m-th image, the robot 20000 is controlled with the obtained Δθg_(m-1) as the target. When the m-th image is obtained, the image feature amount f_m is obtained from that m-th image, and a new target Δθg_m is calculated using the above expression (6). Between the m-th image and the (m+1)-th image, the calculated Δθg_m is used in the control. This processing is continued until it is finished, that is, until the image feature amount is sufficiently close to fg.
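The flow of Fig. 42 can be sketched as an idealized loop in which the feature amount is assumed to respond exactly linearly to each joint update; this is a strong simplification (real visual servoing re-measures f from each newly captured image), and the diagonal Jacobian and gain are illustrative:

```python
def servo_step(Jv_diag, f, fg, lam=0.5):
    """One application of eq. (6) for a diagonal image Jacobian."""
    return [-lam * (fi - gi) / j for j, fi, gi in zip(Jv_diag, f, fg)]

Jv_diag = [2.0, 1.0]            # diagonal toy image Jacobian
f, fg = [8.0, 0.0], [0.0, 4.0]
for _ in range(20):             # one simulated "image" per iteration
    d_theta_g = servo_step(Jv_diag, f, fg)
    # Idealized plant: feature responds exactly as Delta f = Jv * Delta theta
    f = [fi + j * dt for fi, j, dt in zip(f, Jv_diag, d_theta_g)]
# The error f - fg is halved each step, so f has converged near fg.
```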
Although the joint angle variation serving as the target is obtained in this way, it is not always necessary to make the joint angles change by the target amount. For example, between the m-th image and the (m+1)-th image, control is performed with Δθg_m as the target value, but the following case is also often conceivable: before the actual variation reaches Δθg_m, the next image, that is, the (m+1)-th image is obtained, and a new target value Δθg_(m+1) is calculated from it.
4. Abnormality detection technique
The abnormality detection technique of the present embodiment is described. As shown in Fig. 43A, when the joint angles of the robot 20000 are θp, the p-th image information is obtained, and the image feature amount fp is calculated from the p-th image information. Then, at a time later than the acquisition time of the p-th image information, when the joint angles of the robot 20000 are θq, the q-th image information is obtained, and the image feature amount fq is calculated from the q-th image information. Here, the p-th image information and the q-th image information may be adjacent image information in the time series, or may be non-adjacent image information (other image information is obtained after the acquisition of the p-th image information and before the acquisition of the q-th image information).
In visual servoing, as described above, fp and fq are used for the calculation of Δθg via their differences from fg as Δf, but the difference fq - fp between fp and fq is nothing other than an image feature amount variation. Furthermore, since the joint angles θp and θq are acquired by the joint angle detection unit 115 from the encoders or the like and can thus be obtained as measured values, the difference θq - θp between θp and θq is the joint angle variation Δθ. That is, by obtaining the corresponding image feature amount f and joint angles θ for each of the two pieces of image information, the image feature amount variation is obtained as Δf = fq - fp, and the corresponding joint angle variation is obtained as Δθ = θq - θp.
Moreover, as shown in the above expression (5), the relation Δf = Jv·Δθ holds. That is, if Δfe = Jv·Δθ is obtained using the measured Δθ = θq - θp and the Jacobian matrix Jv, then in an ideal environment generating no error at all, the obtained Δfe should agree with the measured Δf = fq - fp.
Thus, the variation inference unit 1130 computes the inferred image feature amount variation Δfe by applying to the joint angle information the Jacobian matrix Jv that relates the joint angle information to the image feature amount (specifically, that relates the variation of the joint angle information to the image feature amount variation). As described above, in an ideal environment, the obtained inferred image feature amount variation Δfe should agree with the image feature amount variation Δf = fq - fp obtained in the variation operation unit 1120; conversely, when Δf and Δfe differ greatly, it can be determined that some abnormality has occurred.
Here, as factors causing errors between Δf and Δfe, the following are conceivable: errors when computing the image feature amount from the image information, errors when reading the joint angle values from the encoders or the like, and errors included in the Jacobian matrix Jv. However, when the encoders read the joint angle values, the possibility of generating an error is low compared with the other two factors. Also, the error included in the Jacobian matrix Jv is not a very large error. In contrast, since many objects other than the recognition target are captured in the image, the frequency of errors when computing the image feature amount from the image information is high. Moreover, when an abnormality occurs in the image feature amount computation, the error may become very large. For example, if the recognition processing for identifying the desired object from the image fails, the object may be misrecognized as being at a position on the image different from its original position. Accordingly, in the present embodiment, abnormalities in the computation of the image feature amount are mainly detected. However, errors caused by other factors may also be detected as abnormalities.
In the abnormality determination, for example, determination processing using a threshold is performed. Specifically, the abnormality determination unit 1140 performs comparison processing between the difference information of the image feature amount variation Δf and the inferred image feature amount variation Δfe and a threshold, and determines an abnormality when the difference information is larger than the threshold. For example, a given threshold Th is set, and when the following expression (7) is satisfied, it is determined that an abnormality has occurred. In this way, an abnormality can be detected by simple computation such as the following expression (7).

|Δf - Δfe| > Th (7)
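A sketch of the determination of expression (7) follows, interpreting |·| as the Euclidean norm of the element-wise difference (the text does not fix a particular norm, and all values are illustrative):

```python
import math

def is_abnormal(delta_f, delta_fe, th):
    """Eq. (7): judge abnormal when |delta_f - delta_fe| > Th,
    using the Euclidean norm of the element-wise difference."""
    diff = math.sqrt(sum((a - b) ** 2 for a, b in zip(delta_f, delta_fe)))
    return diff > th

delta_f  = [2.0, 1.0, -1.0]    # measured from two images
delta_fe = [1.9, 1.1, -0.9]    # inferred from encoder values via Jv
ok  = is_abnormal(delta_f, delta_fe, th=1.0)            # False: small mismatch
bad = is_abnormal([9.0, 1.0, -1.0], delta_fe, th=1.0)   # True: large error,
                                                        # e.g. failed matching
```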
In addition, threshold value Th need not be fixed value, change can also with situation accordingly its value.For example, abnormality determination unit
1140 can also be configured to, two image letters used in the computing of the image feature amount variable quantity in variable quantity operational part 1120
The difference at the acquisition moment of breath is bigger, then is set to threshold value bigger.
As shown in Fig. 41 and elsewhere, the Jacobian matrix Jv is the matrix that relates Δθ and Δf. As shown in Fig. 44, even when the same Jacobian matrix Jv is applied, the Δfe' obtained by applying Jv to a Δθ' whose variation is larger than that of Δθ has a larger variation than the Δfe obtained by applying Jv to Δθ. Since it can hardly be assumed that the Jacobian matrix Jv generates no error at all, deviations arise in Δfe and Δfe', as shown in Fig. 44, compared with the ideal image feature amount variations Δfi and Δfi' for the joint angle variations Δθ and Δθ'. Moreover, as can be seen from the comparison of A1 and A2 in Fig. 44, the larger the variation, the larger this deviation.
If it is assumed that no error at all occurs in the image feature amount computation, the image feature amount variation Δf obtained from the image information is equal to Δfi or Δfi′. In this case, the left side of expression (7) represents the error produced by the Jacobian matrix: it becomes a value comparable to A1 when the variations of Δθ, Δfe, and so on are small, and a value comparable to A2 when the variations of Δθ′, Δfe′, and so on are large. However, as described above, the same Jacobian matrix Jv is used for both Δfe and Δfe′; although the value of the left side of expression (7) becomes larger, it is not appropriate to determine that the A2 side is in a state of higher abnormality than the A1 side. In other words, it cannot be said to be appropriate that expression (7) is not satisfied (no abnormality determined) in the situation corresponding to A1 yet is satisfied (abnormality determined) in the situation corresponding to A2. Therefore, the abnormality determination unit 1140 sets the threshold value Th larger as the variations of Δθ, Δfe, and so on become larger. In this way, the threshold value Th in the situation corresponding to A2 is larger than in the situation corresponding to A1, so an appropriate abnormality determination can be performed. The larger the difference between the acquisition times of the two pieces of image information (the p-th image information and the q-th image information in the example of Figure 43A), the larger Δθ, Δfe, and so on become; therefore, in the processing, the threshold value Th is set, for example, according to the difference of the image acquisition times.
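The time-dependent threshold described above might be sketched as follows. The patent specifies only that Th grows with the acquisition-time difference of the two images, so the linear form and the `gain` parameter here are assumptions for illustration.

```python
def adaptive_threshold(base_th, dt_images, gain=1.0):
    # Th grows with the acquisition-time difference of the two images,
    # since larger Δθ and Δfe imply a larger Jacobian-induced error.
    return base_th + gain * dt_images
```

Any monotonically increasing function of `dt_images` would serve the same purpose; the key property is that the threshold is larger for the A2-like situation than for the A1-like one.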
Various controls can be considered for the case where an abnormality is detected by the abnormality determination unit 1140. For example, when an abnormality is detected by the abnormality determination unit 1140, the robot control unit 1110 may perform control to stop the robot 20000. As described above, a case in which an abnormality is detected is, for example, a case in which the computation of the image feature amount from the image information has produced a large error. That is, if the robot 20000 is controlled using that image feature amount (fq in the example of Figure 43A), the robot 20000 may be moved in a direction far from the direction that brings the image feature amount closer to the target image feature amount fg. In that situation, the arm 2210 or the like may collide with another object, or an unreasonable posture may cause an object held by the hand or the like to fall. Accordingly, one example of control at the time of an abnormality is to stop the motion of the robot 20000 itself and not perform an action carrying such a large risk.
In addition, if it is inferred that the image feature amount fq contains a large error and control using fq is not desired, it is also possible not to stop the robot motion immediately but simply not to use fq for control. Thus, for example, when an abnormality is detected by the abnormality determination unit 1140, the robot control unit 1110 may skip the control based on the image information obtained at the later time in the time series among the two pieces of image information used in the computation of the image feature amount variation in the variation amount computing unit 1120 (that is, the abnormality determination image information), and instead perform control based on image information obtained earlier than the abnormality determination image information.
In the example of Figure 43A, the abnormality determination image information is the q-th image information. In the example of Figure 42, abnormality determination is performed using two adjacent pieces of image information, and it is determined that there is no abnormality between the (m−2)-th and (m−1)-th image information, no abnormality between the (m−1)-th and m-th image information, and an abnormality between the m-th and (m+1)-th image information. In this case, f_{m−1} and f_m are understood to have no abnormality while f_{m+1} is abnormal, so Δθg_{m−1} and Δθg_m can be used for control, but Δθg_{m+1} is unsuitable for control. Originally, Δθg_{m+1} would be used for control between the (m+1)-th image information and the next, (m+2)-th image information, but here that control is not performed because it is inappropriate. In this case, between the (m+1)-th and (m+2)-th image information, the previously obtained Δθg_m is used again to operate the robot 20000. Since Δθg_m is, at least at the moment f_m was calculated, information that moves the robot 20000 toward the target, it is difficult to think that a large error will be produced even if its use continues after the calculation of f_{m+1}. In this way, even when an abnormality is detected, rough control can continue using past information, in particular information that was obtained before the moment of abnormality detection and for which no abnormality was detected, so that the motion of the robot 20000 continues. After that, when new image information is obtained (the (m+2)-th image information in the example of Figure 42), control is performed using the new image feature amount obtained from that new image information.
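The reuse of the previous valid joint angle change described above can be sketched as follows; this is a hypothetical illustration, and the class and method names are not from the patent.

```python
class FallbackController:
    """Reuse the last Δθg computed from features judged normal
    when the newest feature amount is judged abnormal."""

    def __init__(self):
        self.last_valid_dtheta_g = None

    def step(self, dtheta_g, abnormal):
        if not abnormal:
            # f_m was judged normal, so Δθg_m is safe to keep using
            self.last_valid_dtheta_g = dtheta_g
            return dtheta_g
        # f_{m+1} abnormal: skip Δθg_{m+1}, keep moving with Δθg_m
        return self.last_valid_dtheta_g
```

The design choice here matches the text: the last command that pointed toward the target is at worst slightly stale, so continuing with it is safer than either stopping or following a corrupted command.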
The flowchart of Figure 48 shows the processing flow of the present embodiment, up to and including the handling at abnormality detection. When the processing is started, first, acquisition of an image by the image information acquisition unit 116, computation of the image feature amount by the image feature amount computing unit 117, and computation of the image feature amount variation in the variation amount computing unit 1120 are performed (S10001). In addition, detection of the joint angles by the joint angle detection unit 115 is performed, and the inferred image feature amount variation is inferred in the variation amount inferring unit 1130 (S10002). Then, abnormality determination is performed according to whether the difference between the image feature amount variation and the inferred image feature amount variation is at or below the threshold value (S10003).
When the difference is at or below the threshold value (Yes in S10003), no abnormality has occurred, so control is performed using the image feature amount obtained in S10001 (S10004). Then, it is determined whether the current image feature amount is sufficiently close to (in a narrow sense, matches) the target image feature amount (S10005); if Yes, the target has been reached normally and the processing ends. On the other hand, if No in S10005, no abnormality has occurred in the action itself but the target has not been reached, so the flow returns to S10001 and control continues.
When the difference between the image feature amount variation and the inferred image feature amount variation is larger than the threshold value (No in S10003), it is determined that an abnormality has occurred. Then, it is determined whether the abnormality has occurred N consecutive times (S10006). If it has occurred consecutively, the abnormality is of a degree for which continuing the action is not preferable, so the action is stopped. On the other hand, if the abnormality has not occurred N consecutive times, control is performed using a past image feature amount that was determined not to be abnormal (S10007), and the flow returns to S10001 to continue the image processing at the next moment. In the flowchart of Figure 48, as described above, until the abnormality reaches a certain degree (here, up to N−1 consecutive occurrences), the action is not stopped immediately, and control is performed in the direction of continuing the action.
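The S10001 to S10007 flow can be condensed into a loop like the following; the anomaly test and control command are stand-ins (any callables), and stopping after `n_stop` consecutive abnormalities mirrors the N-times check of S10006. This is a sketch, not the patented control law.

```python
def control_loop(features, detect_anomaly, apply_control, n_stop=3):
    # Sketch of the Figure 48 flow: isolated abnormalities fall back to
    # the previous valid feature amount; n_stop consecutive ones stop.
    consecutive = 0
    last_valid = None
    for f in features:
        if detect_anomaly(f):           # S10003: difference above threshold
            consecutive += 1
            if consecutive >= n_stop:   # S10006: N consecutive abnormalities
                return "stopped"
            if last_valid is not None:  # S10007: control with past feature
                apply_control(last_valid)
        else:
            consecutive = 0
            last_valid = f
            apply_control(f)            # S10004: control with current feature
    return "done"
```

A single spurious feature amount thus produces one fallback step, while a persistent fault halts the motion, matching the text's trade-off between safety and continuity.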
In the above description, no particular consideration was given to the time differences among the acquisition time of the image information, the acquisition time of the joint angle information, and the acquisition time (computation end time) of the image feature amount. In reality, however, as shown in Figure 45, even if image information is acquired at a given time, a time lag can arise before the encoder reads the joint angle information at the acquisition of that image information and sends the read information to the joint angle detection unit 115. Furthermore, since the image feature amount is computed after image acquisition, a time lag also arises there, and since the computational load of the image feature amount computation differs depending on the image information, the length of that time lag also differs. For example, when nothing other than the recognition target is captured and the background is a single plain color, the image feature amount can be computed at high speed, but when various objects and the like are captured, the computation of the image feature amount takes time.
That is, in Figure 43A, the abnormality determination using the p-th image information and the q-th image information was described in a simplified manner, but in reality, as shown in Figure 45, it is necessary to consider the time lag tθp from the acquisition of the p-th image information to the acquisition of the corresponding joint angle information, and the time lag tfp from the acquisition of the p-th image information to the end of the computation of the image feature amount; the same applies to the q-th image information, for which tθq and tfq must be considered.
The abnormality determination processing starts, for example, at the moment the image feature amount fq of the q-th image information is obtained, but it must be appropriately determined how long before that moment the image feature amount fp used for the difference was obtained, and when the acquisition times of θq and θp were.
Specifically, when the image feature amount f1 of the first image information is obtained at the i-th moment (i is a natural number) and the image feature amount f2 of the second image information is obtained at the j-th moment (j is a natural number satisfying j ≠ i), the variation amount computing unit 1120 obtains the difference between the image feature amount f1 and the image feature amount f2 as the image feature amount variation; and when the variation amount inferring unit 1130 obtains variation inference information p1 corresponding to the first image information at the k-th moment (k is a natural number) and variation inference information p2 corresponding to the second image information at the l-th moment (l is a natural number), the inferred image feature amount variation is obtained from the variation inference information p1 and the variation inference information p2.
In the example of Figure 45, the image feature amounts and the joint angle information are obtained at various moments. With the acquisition moment of fq (for example, the j-th moment) as the reference, the image feature amount fp corresponding to the p-th image information is the image feature amount obtained a time (tfq + ti − tfp) earlier; that is, the i-th moment is determined to be (tfq + ti − tfp) before the j-th moment. Here, ti denotes the difference between the image acquisition times, as shown in Figure 45.
Similarly, the l-th moment, which is the acquisition moment of θq, is determined to be the moment (tfq − tθq) before the j-th moment, and the k-th moment, which is the acquisition moment of θp, is determined to be the moment (tfq + ti − tθp) before the j-th moment. In the means of the present embodiment, it is necessary to obtain a Δf corresponding to Δθ; specifically, if Δf is obtained from the p-th image information and the q-th image information, then Δθ also needs to correspond to the p-th image information and the q-th image information. If not, the inferred image feature amount variation Δfe obtained from Δθ has no correspondence with Δf, and the comparison processing with Δf becomes meaningless. Thus, as described above, determining the correspondence of the moments is important. In Figure 45, since the driving of the joint angles itself is performed at very high speed and high frequency, it is treated as a continuous process.
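Relative to the moment fq becomes available, the offsets discussed above might be computed as follows. The argument names mirror ti, tfp, tfq, tθp, and tθq of Figure 45; the function itself is only an illustrative assumption.

```python
def aligned_offsets(ti, tfp, tfq, t_theta_p, t_theta_q):
    # All offsets mean "how long before the moment fq is obtained".
    fp_offset = tfq + ti - tfp             # when fp became available
    theta_q_offset = tfq - t_theta_q       # when θq was acquired
    theta_p_offset = tfq + ti - t_theta_p  # when θp was acquired
    return fp_offset, theta_q_offset, theta_p_offset
```

With these offsets, Δf and Δθ can be built from pairs of values that actually correspond to the same two image acquisitions.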
In existing robots 20000 and robot controllers 1000, the difference between the acquisition time of the image information and the acquisition time of the corresponding joint angle information can be considered sufficiently small. Accordingly, the k-th moment may be regarded as the acquisition moment of the first image information and the l-th moment as the acquisition moment of the second image information. In this case, tθp and tθq in Figure 45 can be set to 0, which simplifies the processing.
As a more specific example, consider means in which the next image information is acquired at the moment the image feature amount has been calculated from the previous image information. An example of this case is shown in Figure 46. The vertical axis of Figure 46 is the value of the image feature amount; the "actual feature amount" is the value that would be obtained if an image feature amount corresponding to the joint angle information at that moment could be obtained, and it cannot be confirmed in the processing. From the fact that the actual feature amount transitions smoothly, the driving of the joint angles can be considered continuous.
In this case, since the image feature amount corresponding to the image information obtained at moment B1 is obtained at moment B2, which is t2 later, the actual feature amount at B1 corresponds to the image feature amount at B2 (they match if there is no error). Moreover, the next image information is obtained at moment B2.
Similarly, for the image feature amount of the image information obtained at B2, the computation ends at B3, and the next image information is obtained at B3. In the same way thereafter, for the image feature amount of the image information obtained at B4, the computation ends at B5, which is t1 later, and the next image information is obtained at B5.
In the example of Figure 46, when the image feature amount calculated at moment B5 and the image feature amount calculated at moment B2 are used for the abnormality determination processing, the acquisition moments of the image information are B4 and B1, respectively. As described above, when the difference between the acquisition time of the image information and the acquisition time of the corresponding joint angle information is sufficiently small, the joint angle information at moment B4 and the joint angle information at moment B1 are used. That is, as shown in Figure 46, with the difference between B2 and B5 as Ts and moment B5 as the reference, the feature amount used as the comparison target is the feature amount at the moment Ts earlier. In addition, the two pieces of joint angle information used when obtaining the difference of the joint angle information are the information at the moment t1 earlier and the information at the moment Ts + t2 earlier. The difference between the acquisition moments of the two pieces of image information is therefore (Ts + t2 − t1). Accordingly, when the threshold value Th is determined from the difference between the acquisition times of the image information, the value (Ts + t2 − t1) is used.
Various acquisition times of the information can thus be considered, but as described above, the point of determining the moments so that a correspondence exists between Δf and Δθ remains the same.
5. Modification examples
In the above description, Δf and Δθ are obtained, the inferred image feature amount variation Δfe is obtained from Δθ, and Δf is compared with Δfe. However, the means of the present embodiment are not limited to this. For example, as with the measurement means described above, the position and posture information of the hand tip of the robot 20000, or of an object held by the hand tip or the like, may be obtained by some means.
In this case, position and posture information X is obtained as the variation inference information, so its variation ΔX can be obtained. As shown in expression (4) above, by applying the Jacobian matrix Ji to ΔX, the inferred image feature amount variation Δfe can be obtained in the same way as in the case of Δθ. Once Δfe is obtained, the subsequent processing is the same as in the example above. That is, the variation amount inferring unit 1130 computes the inferred image feature amount variation by applying, to the variation of the position and posture information, the Jacobian matrix that relates the position and posture information to the image feature amount (specifically, relates the variation of the position and posture information to the image feature amount variation). Figure 43B shows the flow of this processing, in correspondence with Figure 43A.
Here, when the position and posture of the hand tip of the robot 20000 (the hand or the end effector 2220) is used as the position and posture information, Ji is information that relates the variation of the position and posture information of the hand tip to the image feature amount variation. When the position and posture of the object is used as the position and posture information, Ji is information that relates the variation of the position and posture information of the object to the image feature amount variation. Alternatively, if it is known with what relative position and posture the end effector holds the object, the position and posture information of the end effector 2220 corresponds one-to-one with the position and posture information of the object, so the information of one can be converted into the information of the other. That is, various embodiments are conceivable, such as obtaining the position and posture information of the end effector 2220, converting it into the position and posture information of the object, and then obtaining Δfe using the Jacobian matrix Ji that relates the variation of the position and posture information of the object to the image feature amount variation.
In addition, the comparison processing of the abnormality determination of the present embodiment is not limited to being performed with the image feature amount variation Δf and the inferred image feature amount variation Δfe. The image feature amount variation Δf, the position and posture information variation ΔX, and the joint angle information variation Δθ can be mutually converted by using a Jacobian matrix or the inverse matrix of a Jacobian matrix (in a broad sense, a generalized inverse matrix), that is, an inverse Jacobian matrix.
That is, as shown in Figure 49, the means of the present embodiment can be applied to a robot controller configured to include: a robot control unit 1110 that controls the robot 20000 based on image information; a variation amount computing unit 1120 that obtains a position and posture variation representing the variation of the position and posture information of the end effector 2220 of the robot 20000 or of the object, or a joint angle variation representing the variation of the joint angle information of the robot 20000; a variation amount inferring unit 1130 that obtains the image feature amount variation from the image information and obtains, from the image feature amount variation, an inferred amount of the position and posture variation, that is, an inferred position and posture variation, or an inferred amount of the joint angle variation, that is, an inferred joint angle variation; and an abnormality determination unit 1140 that performs abnormality determination by comparison processing of the position and posture variation with the inferred position and posture variation, or by comparison processing of the joint angle variation with the inferred joint angle variation.
Compared with Figure 36, the configuration in Figure 49 is one in which the roles of the variation amount computing unit 1120 and the variation amount inferring unit 1130 are exchanged. That is, the variation amount computing unit 1120 obtains the variation from the joint angle information (here, the joint angle variation or the position and posture variation), and the variation amount inferring unit 1130 infers the variation from the difference of the image feature amounts (obtaining the inferred joint angle variation or the inferred position and posture variation). In Figure 49, the variation amount computing unit 1120 is configured to obtain the joint angle information, but as described above, the variation amount computing unit 1120 may also obtain the position and posture information using a measurement result or the like.
Specifically, when Δf and Δθ are obtained, the inferred joint angle variation Δθe may be obtained using the following expression (8), derived from expression (5) above, and the comparison processing of Δθ with Δθe may be performed. Specifically, using a given threshold value Th2, an abnormality is determined when the following expression (9) holds.
Δθe = Jv⁻¹·Δf ····· (8)
|Δθ − Δθe| > Th2 ····· (9)
Alternatively, when Δf and ΔX are obtained using the measurement means described above, the inferred position and posture variation ΔXe may be obtained using the following expression (10), derived from expression (4) above, and the comparison processing of ΔX with ΔXe may be performed. Specifically, using a given threshold value Th3, an abnormality is determined when the following expression (11) holds.
ΔXe = Ji⁻¹·Δf ····· (10)
|ΔX − ΔXe| > Th3 ····· (11)
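Expressions (8) and (9) can be sketched for a two-joint case as follows. The explicit 2×2 inverse stands in for Jv⁻¹ (a generalized inverse in the broad sense), the Euclidean norm is assumed for |·|, and all names are assumptions for illustration.

```python
def inv2(m):
    # Inverse of a 2x2 matrix [[a, b], [c, d]]
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(m, v):
    # Matrix-vector product with m given as a list of rows
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

def abnormal_joint_space(jv, d_f, d_theta, th2):
    # Expression (8): Δθe = Jv⁻¹ · Δf
    d_theta_e = matvec(inv2(jv), d_f)
    # Expression (9): abnormal when |Δθ − Δθe| > Th2
    err = sum((a - b) ** 2 for a, b in zip(d_theta, d_theta_e)) ** 0.5
    return err > th2
```

The same pattern applies to expressions (10) and (11), with Ji⁻¹ and ΔX taking the places of Jv⁻¹ and Δθ.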
Moreover, the comparison is not limited to using the directly obtained information. For example, when Δf and Δθ are obtained, the inferred position and posture variation ΔXe may be obtained from Δf using expression (10), the position and posture variation ΔX may be obtained from Δθ using expression (3) (strictly speaking, this ΔX is not a measured value but an inferred value), and the determination using expression (11) may then be performed.
Alternatively, when Δf and ΔX are obtained, the inferred joint angle variation Δθe may be obtained from Δf using expression (8), the joint angle variation Δθ may be obtained from ΔX using the following expression (12), derived from expression (3) (strictly speaking, this Δθ is not a measured value but an inferred value), and the determination using expression (9) may then be performed.
Δθ = Ja⁻¹·ΔX ····· (12)
That is, the variation amount computing unit 1120 performs any one of the following: processing of obtaining a plurality of pieces of position and posture information and obtaining their difference as the position and posture variation; processing of obtaining a plurality of pieces of position and posture information and obtaining the joint angle variation from their difference; processing of obtaining a plurality of pieces of joint angle information and obtaining their difference as the joint angle variation; and processing of obtaining a plurality of pieces of joint angle information and obtaining the position and posture variation from their difference.
Figure 47 summarizes the relations among Δf, ΔX, and Δθ described above, with the expression numbers of this specification noted alongside. That is, in the means of the present embodiment, if any two of Δf, ΔX, and Δθ are obtained, they can be converted into any one of Δf, ΔX, and Δθ and then compared, so that the means of the present embodiment can be realized, and various modifications can be made to the information to be obtained and the information used for the comparison.
In addition, the robot controller 1000 and the like of the present embodiment may realize part or most of its processing by a program. In that case, a processor such as a CPU executes the program, thereby realizing the robot controller 1000 and the like of the present embodiment. Specifically, a program stored in a non-transitory information storage medium is read, and a processor such as a CPU executes the read program. Here, the information storage medium (a computer-readable medium) stores programs, data, and the like, and its function can be realized by an optical disc (DVD, CD, or the like), an HDD (hard disk drive), a memory (card-type memory, ROM, or the like), or the like. The processor such as a CPU performs the various kinds of processing of the present embodiment according to the program (data) stored in the information storage medium. That is, the information storage medium stores a program for causing a computer (a device including an operation unit, a processing unit, a storage unit, and an output unit) to function as each unit of the present embodiment (a program for causing the computer to execute the processing of each unit).
Although the present embodiment has been described in detail above, those skilled in the art will readily understand that many variations are possible without substantially departing from the novel matters and effects of the present invention. Accordingly, all such variations are included within the scope of the present invention. For example, a term described at least once in the specification or drawings together with a different term having a broader or the same meaning can be replaced by that different term at any place in the specification or drawings. The configuration and operation of the robot controller 1000 and the like are also not limited to those described in the present embodiment, and various modifications can be made.
Seventh Embodiment
1. Means of the present embodiment
First, the means of the present embodiment will be described. Inspections of an inspection target object (in particular, visual inspections) are used in many cases. For a visual inspection, a method in which a person views and observes the object with the eyes is basic, but from viewpoints such as labor saving and higher precision of the inspection, means for automating the inspection using an inspection device have been proposed.
The inspection device here may be a dedicated device; for example, as shown in Figure 54, a device including an imaging unit CA, a processing unit PR, and an interface unit IF can be considered as a dedicated inspection device. In this case, the inspection device obtains a captured image of the inspection target object OB taken with the imaging unit CA, and inspection processing using the captured image is performed in the processing unit PR. Various contents of the inspection processing can be considered; for example, an image of the inspection target object OB in a state determined in advance to be acceptable in the inspection (which may be a captured image or may be composed of model data) is obtained as a qualified image, and comparison processing of this qualified image with the actual captured image is performed. If the captured image is close to the qualified image, the inspection target object OB captured in that image can be determined to be acceptable; if the difference between the captured image and the qualified image is large, the inspection target object OB can be determined to be defective, having some problem. In addition, Patent Document 1 discloses means that use a robot as an inspection device.
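The comparison of a captured image with a qualified image could, in the simplest case, be sketched as a pixelwise difference. Real inspection devices use more elaborate image processing (edges, hue, and so on, as discussed later), so the flat grayscale representation, the mean absolute difference, and the tolerance parameter here are all assumptions for illustration.

```python
def mean_abs_diff(img_a, img_b):
    # Mean absolute pixel difference between two same-size grayscale images,
    # each given as a flat list of pixel values
    return sum(abs(a - b) for a, b in zip(img_a, img_b)) / len(img_a)

def passes_inspection(captured, qualified, tol):
    # Acceptable when the captured image is close enough to the qualified one
    return mean_abs_diff(captured, qualified) <= tol
```

As the text goes on to discuss, such a whole-image comparison is sensitive to irrelevant content (tools, jigs, non-target regions), which is why restricting the comparison to an inspection area matters.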
However, as can be seen from the above example of the qualified image, in order to perform an inspection using an inspection device, information for that inspection must be set in advance. For example, although it depends on the arrangement of the inspection target object OB, information such as from what direction the inspection target object OB is to be observed needs to be set in advance.
In general, how the inspection target object OB appears in the captured image when shot (in a narrow sense, in what shape and size) changes with the relative relation between the inspection target object OB and the observation position and direction. Hereinafter, the position from which the inspection target object OB is observed is expressed as the viewpoint position; in a narrow sense, the viewpoint position means the position at which the imaging unit is arranged. In addition, the direction in which the inspection target object OB is observed is expressed as the line-of-sight direction; in a narrow sense, the line-of-sight direction means the shooting direction of the imaging unit (the direction of the optical axis). If no reference for the viewpoint position and line-of-sight direction is set, the appearance of the inspection target object OB may change at each inspection, so a visual inspection that judges from the appearance whether the inspection target object OB is normal or abnormal may not be performed correctly.
In addition, for the qualified image that serves as the reference for determining that the inspection target object OB has no abnormality, it cannot otherwise be determined from which viewpoint position and line-of-sight direction the image should be kept. That is, if the position and direction of observation at the time of inspection are not known, the comparison target (inspection reference) to be compared with the captured image obtained at the time of inspection is also not known, and an appropriate inspection cannot be performed. It would suffice to keep images of an inspection target object OB determined to be acceptable from all viewpoint positions and line-of-sight directions, thereby avoiding the situation of having no qualified image; however, the number of viewpoint positions and line-of-sight directions in that case becomes quite enormous, and the number of qualified images accordingly becomes even more enormous, which is unrealistic. From the above points, it is also necessary to keep qualified images in advance in an appropriate manner.
Furthermore, in general, the qualified image and the captured image may also contain information unnecessary for the inspection, so when the inspection processing (comparison processing) is performed using the entire image, the precision of the inspection may be lowered. For example, in the captured image, tools, jigs, and the like may be captured in addition to the inspection target object, and it is preferable not to use such information for the inspection. In addition, when only a part of the inspection target object is the inspection target, the precision of the inspection may also be reduced by information of regions of the inspection target object that are not the inspection target. Specifically, as will be described later with Figures 64A to 64D, in the case of considering an operation of assembling an object B, which is smaller than an object A, to the larger object A, the target of the inspection should be the surroundings of the assembled object B; the necessity of inspecting the entire object A is low, and making the entire object A the inspection target may also increase the possibility of misjudgment. Accordingly, if improving the precision of the inspection processing is considered, the inspection area also becomes important information in the inspection.
But it is previous, above-mentioned inspection area, viewpoint position, direction of visual lines or qualified images etc are used for what is checked
Information is set by the user of the professional knowledge with image procossing.Be because while be by image procossing come
The comparison for carrying out qualified images and captured image is handled, but also requirement and the specific content of the image procossing is accordingly changed
Check the setting of required information.
For example, whether image processing that uses edges in the image, image processing that uses pixel values as a whole, image processing that uses luminance, color difference, or hue, or some other image processing is suitable for the comparison processing of the qualified image and the captured image (in a narrow sense, the determination of their similarity) may vary with the shape, tone, texture, and so on of the inspection target object OB. Therefore, in an inspection in which the content of the image processing can be changed, the user performing the inspection must appropriately set which kind of image processing is used.
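The dependence on the processing content can be made concrete with a small sketch: below, two toy similarity measures (both invented here for illustration, not taken from the patent) disagree about the same pair of images. A uniform brightness shift leaves an edge-based comparison untouched but lowers a pixel-value comparison, which is exactly why the choice of processing matters:

```python
def pixel_similarity(a, b):
    """Mean absolute pixel difference, mapped to [0, 1] for 8-bit values."""
    flat_a = [p for row in a for p in row]
    flat_b = [p for row in b for p in row]
    mad = sum(abs(x - y) for x, y in zip(flat_a, flat_b)) / len(flat_a)
    return 1.0 - mad / 255.0

def edge_map(img):
    """Crude horizontal-gradient edge map (threshold on neighbour difference)."""
    return [[1 if abs(row[i + 1] - row[i]) > 20 else 0
             for i in range(len(row) - 1)] for row in img]

def edge_similarity(a, b):
    """Fraction of edge pixels that agree between two images."""
    ea, eb = edge_map(a), edge_map(b)
    flat = [(x, y) for ra, rb in zip(ea, eb) for x, y in zip(ra, rb)]
    return sum(1 for x, y in flat if x == y) / len(flat)
```

With `b` equal to `a` plus a constant brightness offset, `edge_similarity(a, b)` stays at 1.0 while `pixel_similarity(a, b)` drops, so the two methods would reach different pass/fail conclusions under the same threshold.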
Furthermore, even when the content of the image processing has already been set, or when highly versatile image-processing content has been set in advance, the user still needs to understand that content properly. This is because, if the specific content of the image processing changes, the viewpoint position and line-of-sight direction suited to the inspection can also change. For example, when the comparison processing uses edge information, the viewpoint position and line-of-sight direction should be set so as to observe portions of the inspection target object OB whose shape is complex, and a viewpoint that observes flat portions is unsuitable. If, on the other hand, pixel values are used for the comparison processing, it is preferable to observe regions with large tone variation, or regions that appear bright because they are well illuminated by the light source. That is, with conventional means, the setting of the information required for inspection, including the viewpoint position, the line-of-sight direction, and the qualified images, requires specialized knowledge of image processing. Moreover, if the content of the image processing differs, the criterion for the comparison of the qualified image and the captured image must also be changed. For example, it is necessary to decide, in accordance with the image-processing content, how similar the qualified image and the captured image must be to pass, and how different they must be to fail; without specialized knowledge of image processing, such a criterion cannot be set either.
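The point that the pass/fail criterion is tied to the processing method can be stated in a few lines; the thresholds here are purely hypothetical placeholders for the expertise-dependent values discussed above:

```python
# Hypothetical per-method pass criteria. The actual values would depend on
# the image-processing content, which is exactly why specialized knowledge
# has conventionally been required to set them.
THRESHOLDS = {"edge": 0.95, "pixel": 0.90}

def judge(method, similarity):
    """Return True (qualified) if the similarity meets the criterion for the
    given comparison method, False (unqualified) otherwise."""
    return similarity >= THRESHOLDS[method]
```

The same similarity score of, say, 0.92 would pass under the pixel-value criterion but fail under the edge criterion, so the threshold cannot be set independently of the processing method.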
In other words, even if the inspection itself can be automated by using a robot or the like, setting the information required for that inspection remains difficult, and for users without specialized knowledge it cannot be said that automating the inspection is easy.
In addition, the robot envisaged by the applicant is one that allows the user to teach robot tasks easily, and that is equipped with various sensors and the like so that the robot itself can recognize its working environment and flexibly perform a variety of tasks. Such a robot is suited to multi-product manufacturing (in a narrow sense, high-mix low-volume manufacturing with a relatively small quantity per product type). However, even if teaching at the time of manufacture is easy, inspecting the manufactured products easily is a separate problem. This is because the location to be inspected differs from product to product, and as a result the inspection area that should be the object of comparison between the captured image and the qualified image also differs for each product. That is, when multi-product manufacturing is envisaged, entrusting the setting of the inspection area to the user imposes a large burden associated with that setting processing and causes a reduction in productivity.
The applicant therefore proposes the following means: second inspection information used for the inspection processing is generated from first inspection information, thereby reducing the burden on the user in the inspection processing and improving productivity in robot tasks. Specifically, the robot 30000 of the present embodiment is a robot that performs inspection processing of an inspection target object using a captured image of the object taken by an imaging section (for example, the imaging section 5000 of Fig. 52); it generates, from the first inspection information, second inspection information including the inspection area used for the inspection processing, and performs the inspection processing in accordance with the second inspection information.
Here, the first inspection information is information that the robot 30000 can acquire at a point in time earlier than the execution of the inspection processing, and it is the information used to generate the second inspection information. Since the first inspection information is obtained in advance, it can also be expressed as prior information. In the present embodiment, the first inspection information may be input by the user or may be generated within the robot 30000. Even when the first inspection information is input by the user, its input does not require specialized knowledge of image processing; it is information that can be input easily. Specifically, it can include at least one of shape information of the inspection target object OB, position-and-pose information of the inspection target object OB, and the inspection target position relative to the inspection target object OB.
As described later, the second inspection information can be generated by using the shape information (in a narrow sense, three-dimensional model data), the position-and-pose information, and the information on the inspection target position. The shape information, such as CAD data, is usually obtained in advance, so when the user inputs shape information he or she merely selects existing information. For example, in a situation where data of various objects that are candidates for the inspection target object OB is held, the user selects the inspection target object OB from among the candidates. As for the position-and-pose information, if it is known how and in what pose the inspection target object OB is arranged at the time of inspection (for example, where on the workbench it is placed), the position-and-pose information can also be set easily, and its input does not require specialized knowledge of image processing. The inspection target position is information representing the position in the inspection target object OB that is to be inspected; for example, if breakage of a given part of the inspection target object OB is to be inspected, it is information representing the position of that part. Also, when the inspection target object OB is an object B assembled onto an object A and it is to be checked whether objects A and B have been assembled properly, the assembly position of objects A and B (contact surface, contact point, insertion position, or the like) becomes the inspection target position. The inspection target position can likewise be input easily if the content of the inspection is understood, and its input does not require specialized knowledge of image processing.
The means of the present embodiment is not limited to generating all of the second inspection information automatically. For example, part of the second inspection information may be generated by the means of the present embodiment while the rest is input manually by the user. In that case the user cannot omit the input of the second inspection information entirely, but at least the items that are difficult to set, such as the view information, can be generated automatically, so the advantage that the inspection can be performed easily by the means of the present embodiment is unchanged.
Furthermore, the inspection processing may be processing performed on the result of a robot task carried out by the robot 30000, in which case the first inspection information may be information obtained in the course of that robot task.
The robot task here refers to work performed by the robot, and various tasks can be considered: screw fastening, welding, pressure welding, snap-fit joining, and assembly or deformation using a hand, tool, or jig, among others. When inspection processing is performed on the result of a robot task, the inspection determines whether the robot task has been carried out properly. In that case, various pieces of information relating to the inspection target object OB and the work content must already be acquired in order to begin executing the robot task. For example, where and in what pose the work object (all or part of the inspection target object OB) is placed before the task, and into what pose it changes after the task, are known information. Likewise, if screw fastening or welding is performed, the positions of the fastening screws in the work object and the welded positions are known. Similarly, if multiple objects are combined, it is known at what position, from what direction, and with what object the object A is combined; and if a deformation is applied to the work object, the position of the deformation and the deformed shape are also all known information.
That is, when a robot task is the object of inspection, on the premise that the robot task has been completed, a considerable part of the required first inspection information (depending on the circumstances, all of it) is known: the information corresponding to the shape information, the position-and-pose information, and the inspection target position described above, as well as the other information contained in the first inspection information. In the robot 30000 of the present embodiment, therefore, the first inspection information is obtained by diverting information held by the unit that controls the robot (for example, the processing section 11120 of Fig. 50). Moreover, even when, as described later with Figs. 51A and 51B, the means of the present embodiment is applied to a processing device 10000 separate from the robot 30000, the processing device 10000 acquires the first inspection information from the control unit 3500 or the like included in the robot. From the user's point of view, therefore, the first inspection information need not be input again in order to perform the inspection, and the second inspection information can be generated easily.
As a result, even a user without specialized knowledge of image processing can easily perform the inspection (or at least obtain the second inspection information), or the burden of setting the second inspection information when the inspection is executed can be reduced. In the following description of this specification, the example in which the object of the inspection processing is the result of a robot task is explained. That is, the user does not have to input the first inspection information; as described above, however, there is no harm in the user inputting part or all of the first inspection information. Even when the user inputs the first inspection information, the advantage that the inspection can be performed easily is unchanged in that the input of the first inspection information does not require specialized knowledge.
In the following description, the example in which the robot 30000 generates the second inspection information and executes the inspection processing, described later with Figs. 52 and 53, is mainly explained. However, the means of the present embodiment is not limited to this, and the description can be extended to a means in which, as shown in Fig. 51A, the processing device 10000 generates the second inspection information and the robot 30000 acquires the second inspection information and executes the inspection processing. Alternatively, as shown in Fig. 51B, it can be extended to a means in which the processing device 10000 generates the second inspection information and the inspection processing using the second inspection information is executed not in a robot but in a dedicated inspection device or the like.
Hereinafter, system configuration examples of the robot 30000 and the processing device 10000 of the present embodiment are described, and a specific processing flow is then described. More specifically, the flow from the acquisition of the first inspection information to the generation of the second inspection information is described as offline processing, and the flow of the actual inspection processing performed by the robot using the generated second inspection information is described as online processing.
2. system configuration example
Next, system configuration examples of the robot 30000 and the processing device 10000 of the present embodiment are described. As shown in Fig. 50, the robot of the present embodiment includes an information acquisition section 11110, a processing section 11120, a robot mechanism 300000, and an imaging section 5000. However, the robot 30000 is not limited to the configuration of Fig. 50, and various modifications are possible, such as omitting some of the above constituent elements or adding other constituent elements.
The information acquisition section 11110 acquires the first inspection information before the inspection processing. When the first inspection information is input by the user, the information acquisition section 11110 performs processing to receive the input information from the user. When information used for the robot task serves as the first inspection information, the information acquisition section 11110 performs processing such as reading, from a storage section not shown in Fig. 50, the control information used by the processing section 11120 during the task.
The processing section 11120 generates the second inspection information from the first inspection information acquired by the information acquisition section 11110, and performs the inspection processing using the second inspection information. The processing in the processing section 11120 is described in detail later. In addition to the inspection processing, the processing section 11120 also performs control of the robot 30000 other than inspection (for example, robot tasks such as assembly). For example, the processing section 11120 controls the arm 3100 included in the robot mechanism 300000, the imaging section 5000, and the like. The imaging section 5000 may also be a hand-eye camera attached to the arm 3100 of the robot.
Furthermore, as shown in Fig. 51A, the means of the present embodiment can be applied to a processing device 10000 that outputs information used for the inspection processing of an inspection target object, the inspection being performed using a captured image of the object taken by an imaging section (the imaging section 5000 is shown in Fig. 51A, but the imaging section is not limited to this). From the first inspection information, the processing device generates second inspection information including view information, which contains the viewpoint position and line-of-sight direction of the imaging section for the inspection processing, and the inspection area for the inspection processing, and outputs the second inspection information to the device that performs the inspection processing. In this case, the acquisition of the first inspection information and the generation of the second inspection information are performed by the processing device 10000, which can be realized, as shown for example in Fig. 51A, as a processing device including the information acquisition section 11110 and the processing section 11120.
Here, the device that performs the inspection processing may be the robot 30000 as described above. In this case, as shown in Fig. 51A, the robot 30000 includes the arm 3100, the imaging section 5000 for the inspection processing of the inspection target object, and the control unit 3500 that controls the arm 3100 and the imaging section 5000. The control unit 3500 acquires from the processing device 10000, as the second inspection information, information including the view information representing the viewpoint position and line-of-sight direction of the imaging section 5000 and the inspection area; in accordance with the second inspection information, it performs control to move the imaging section 5000 to the viewpoint position and line-of-sight direction corresponding to the view information, and then executes the inspection processing using the acquired captured image and the inspection area.
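The control flow just described can be sketched in a few lines. Here `move_camera` and `capture` are stand-ins for the robot-control and camera interfaces (both hypothetical; the patent does not specify an API), and the second inspection information is modeled as a plain dictionary:

```python
def run_inspection(second_info, move_camera, capture):
    """Online inspection sketch: move to the stored viewpoint, shoot, and
    compare only the inspection area against the reference.

    second_info keys (illustrative): viewpoint, gaze, roi (top, left, h, w),
    reference_roi, tolerance.
    """
    # Move the imaging section to the viewpoint/gaze from the view information.
    move_camera(second_info["viewpoint"], second_info["gaze"])
    image = capture()
    # Crop the inspection area out of the captured image.
    top, left, h, w = second_info["roi"]
    roi_pixels = [row[left:left + w] for row in image[top:top + h]]
    # Sum of absolute differences against the reference, within tolerance.
    diffs = sum(abs(a - b) for ra, rb in zip(roi_pixels,
                                             second_info["reference_roi"])
                for a, b in zip(ra, rb))
    return diffs <= second_info["tolerance"]
```

Pixels outside the ROI (here, the third column of the captured frame) never enter the comparison, mirroring the role of the inspection area in the second inspection information.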
In this way, the second inspection information can be generated in the processing device 10000, and the inspection processing can be performed appropriately in another machine using that second inspection information. If the device that performs the inspection processing is the robot 30000, a robot that performs inspection processing using the second inspection information can be realized just as in Fig. 50; however, Fig. 51A differs from Fig. 50 in that the machine that generates the second inspection information is configured separately from the subject that executes the inspection processing using the second inspection information.
Moreover, the processing device 10000 may not only generate the second inspection information but may also perform control processing of the robot 30000 in coordination with it. For example, the processing section 11120 of the processing device 10000 generates the second inspection information and also generates robot control information based on the second inspection information. In this case, the control unit 3500 of the robot operates the arm 3100 and the like in accordance with the control information generated by the processing section 11120 of the processing device 10000. That is, the processing device 10000 undertakes the essential part of the control of the robot, and the processing device 10000 in this case can also be understood as a robot control device.
The subject that executes the inspection processing using the second inspection information generated by the processing device 10000 is not limited to the robot 30000. For example, the inspection processing using the second inspection information may be performed not by the robot of Fig. 50 but by a dedicated inspection device; a configuration for this case is shown, for example, in Fig. 51B. In Fig. 51B, the inspection device receives the input of the first inspection information (using, for example, the interface section IF of Fig. 54) and outputs the first inspection information to the processing device 10000. In this case, the processing device 10000 generates the second inspection information using the first inspection information input from the inspection device. However, various modifications are possible, such as the first inspection information being input to the processing device directly by the user.
As shown in Fig. 52, the robot 30000 of the present embodiment may be a single-arm robot having one arm. In Fig. 52, the imaging section 5000 (a hand-eye camera) is provided at the end effector of the arm 3100. However, various modifications are possible, such as providing a grip section such as a hand as the end effector and providing the imaging section 5000 on that grip section, on another part of the arm 3100, or elsewhere. In Fig. 52, a machine such as a PC is shown as the machine corresponding to the control unit 3500 of Fig. 51A, but this machine may instead correspond to the information acquisition section 11110 and the processing section 11120 of Fig. 50. Also, in Fig. 52, an interface section 6000 including an operation section 6100 and a display section 6200 is shown, but modifications are possible as to whether the interface section 6000 is included and, if it is included, how it is configured.
The configuration of the robot 30000 of the present embodiment is not limited to Fig. 52. For example, as shown in Fig. 53, the robot 30000 may include at least a first arm 3100 and a second arm 3200 different from the first arm 3100, and the imaging section 5000 may be a hand-eye camera provided on at least one of the first arm 3100 and the second arm 3200. In Fig. 53, the first arm 3100 is composed of joints 3110 and 3130 and frames 3150 and 3170 arranged between the joints, and the second arm 3200 is similar, but the arms are not limited to this. Fig. 53 shows an example of a dual-arm robot having two arms, but the robot of the present embodiment may have three or more arms. Although both a hand-eye camera (5000-1) provided on the first arm 3100 and a hand-eye camera (5000-2) provided on the second arm 3200 are depicted, a camera may instead be provided on only one of them.
The robot 30000 of Fig. 53 also includes a base unit section 4000. The base unit section 4000 is provided at the lower part of the robot body and supports the robot body. In the example of Fig. 53, wheels and the like are provided on the base unit section 4000 so that the whole robot can move. However, the base unit section 4000 may instead have no wheels or the like and be fixed to the floor or the like. In the robot of Fig. 53, the control device (the device shown as the control unit 3500 in Fig. 52) is housed in the base unit section 4000, so that the robot mechanism 300000 and the control unit 3500 are formed integrally. Alternatively, instead of providing a specific machine as the device corresponding to the control unit 3500 of Fig. 52, the control unit 3500 described above may be realized by a substrate built into the robot (more specifically, an IC or the like provided on the substrate).
When a robot having two or more arms is used, flexible inspection processing can be performed. For example, when multiple imaging sections 5000 are provided, inspection processing can be performed simultaneously from multiple viewpoint positions and line-of-sight directions. It is also possible to inspect an inspection target object OB held by a grip section provided on one arm, using a hand-eye camera provided on another arm. In this case, not only the viewpoint position and line-of-sight direction of the imaging section 5000 but also the position and pose of the inspection target object OB can be varied.
Furthermore, as shown in Fig. 20, the functions of the parts corresponding to the processing device of the present embodiment or the processing section 11120 in the robot 30000 may be realized by a server 700 connected for communication with the robot 30 via a network 20 including at least one of wired and wireless connections.
Alternatively, in the present embodiment, part of the processing of the processing device and the like of the invention may be performed on the side of the server 700 serving as a processing device. In this case, the processing is realized by distributed processing between the processing device provided on the robot side and the server 700 serving as a processing device. Specifically, the server 700 side performs, among the processes of the processing device of the invention, the processes allocated to the server 700, while the processing device 10000 provided in the robot performs the processes allocated to the robot and the like. For example, suppose the processing device of the invention performs first to M-th processes (M being an integer), and consider the case where each of the first to M-th processes is divided into multiple sub-processes, such as the first process being realized by sub-process 1a and sub-process 1b, and the second process by sub-process 2a and sub-process 2b. In this case, distributed processing can be considered in which the server 700 side performs sub-processes 1a, 2a, ..., Ma, and the processing device 100 provided on the robot side performs sub-processes 1b, 2b, ..., Mb. Here, the processing device of the present embodiment, that is, the processing device that executes the first to M-th processes, may be the processing device that executes sub-processes 1a to Ma, the processing device that executes sub-processes 1b to Mb, or the whole processing device that executes both sub-processes 1a to Ma and sub-processes 1b to Mb. In other words, the processing device of the present embodiment is a processing device that executes at least one sub-process of each of the first to M-th processes.
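The division of each process into server-side and robot-side sub-processes can be modeled as a toy pipeline. Everything here is a schematic stand-in (callables in place of the real server 700 and robot-side processing device), meant only to show the shape of the split, not any actual protocol:

```python
def run_distributed(steps, on_server, on_robot):
    """Toy model of the distributed processing described in the text.

    Each step i is split into sub-processes (ia, ib); the 'a' half runs on
    the server, producing an intermediate result that the 'b' half consumes
    on the robot-side processing device.
    """
    results = []
    for sub_a, sub_b in steps:
        intermediate = on_server(sub_a)      # sub-process ia on server 700
        results.append(on_robot(sub_b, intermediate))  # sub-process ib on robot
    return results
```

In this framing, the "processing device of the present embodiment" may be identified with `on_server`, with `on_robot`, or with the combination of both, matching the three readings given in the text.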
In this way, the server 700, which has a higher processing capacity than, for example, the processing device 10000 on the robot side, can take on processing with a high load and the like. Furthermore, when the processing device also performs robot control, the server 700 can control the operation of each robot collectively, which makes it easy, for example, to make multiple robots operate in coordination.
In recent years, there has been a growing trend toward manufacturing many product types in small quantities. When the type of component being manufactured is changed, the operation performed by the robot must be changed. With a configuration such as that of Fig. 20, even without re-teaching each of multiple robots individually, the server 700 can change the operations performed by the robots collectively. Moreover, compared with providing one processing device for each robot, the trouble of software updates of the processing devices and the like can be greatly reduced.
3. Process flow
Next, the processing flow of the present embodiment is described. Specifically, the flow of acquiring the first inspection information and generating the second inspection information, and the flow of executing the inspection processing in accordance with the generated second inspection information, are described. Assuming that the inspection processing is executed by a robot, the acquisition of the first inspection information and the generation of the second inspection information can be performed without any robot action accompanying the inspection processing, and are therefore expressed as offline processing. The execution of the inspection processing, on the other hand, is accompanied by robot action and is therefore expressed as online processing. In the following, the example in which the object of the inspection processing is the result of assembly work by the robot and the inspection processing is executed by the robot is described; as stated above, however, various modifications are possible.
3.1 Offline processing
First, Fig. 55 shows a specific example of the first inspection information and the second inspection information of the present embodiment. The second inspection information includes the view information (viewpoint position and line-of-sight direction), the inspection area (ROI: region of interest), and the qualified image. The first inspection information includes the shape information (three-dimensional model data), the position-and-pose information of the inspection target object, and the inspection target position.
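One way to picture the two information sets of Fig. 55 is as two records, the second derived from the first. The field names and the derivation rule below are purely illustrative (the real generation procedure is described later in the flowchart of Fig. 56); this sketch only fixes the shape of the data:

```python
from dataclasses import dataclass

@dataclass
class FirstInspectionInfo:
    shape_model: object        # three-dimensional model data (e.g. from CAD)
    pose: tuple                # position/pose of the target at inspection time
    target_positions: list     # inspection target positions on the object

@dataclass
class SecondInspectionInfo:
    viewpoint: tuple           # viewpoint position of the imaging section
    gaze: tuple                # line-of-sight direction
    roi: tuple                 # inspection area (ROI) in the captured image
    qualified_image: object    # reference "pass" image

def generate_second_info(first):
    """Toy derivation: hover at a fixed offset above the first target position
    and look straight down at it. A stand-in for the actual generation."""
    tx, ty, tz = first.target_positions[0]
    viewpoint = (tx, ty, tz + 200)
    gaze = (0.0, 0.0, -1.0)          # pointing back toward the target
    return SecondInspectionInfo(viewpoint, gaze, (0, 0, 64, 64), None)
```

The essential property is the direction of the data flow: everything in `SecondInspectionInfo` is computed from `FirstInspectionInfo`, so the user never has to supply it directly.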
The flow of the specific offline processing is shown in the flowchart of Fig. 56. When the offline processing is started, the information acquisition section 11110 first acquires the three-dimensional model data (shape information) of the inspection target object as the first inspection information (S100001). In an inspection (visual inspection), how the inspection target object is observed is important, and the appearance observed from a given viewpoint position and line-of-sight direction depends on the shape of the inspection target object. In particular, since the three-dimensional model data is information on the inspection target object in an ideal state without defects or deformation, it is useful information for the inspection processing of the actual inspection target object.
When the inspection processing is performed on the result of a robot task, the information acquisition section 11110 acquires the post-task three-dimensional model data, that is, the three-dimensional model data of the inspection target object obtained as the result of the robot task, and the pre-task three-dimensional model data, that is, the three-dimensional model data of the inspection target object before the robot task. When the result of a robot task is inspected, it must be determined whether the task has been performed properly. If the task is an assembly task of assembling an object B onto an object A, it is determined whether the object B has been assembled onto the object A at the specified position and from the specified direction. That is, acquiring the three-dimensional model data of objects A and B individually is not sufficient; what matters is the data in which object B is assembled onto object A at the specified position and from the specified direction, in other words, the three-dimensional model data of the state in which the task has ideally been completed. The information acquisition section 11110 of the present embodiment therefore acquires the post-task three-dimensional model data. In addition, as described later, there are also cases where the difference in appearance before and after the task is the key point in setting the inspection area and the pass threshold, so the pre-task three-dimensional model data is also acquired in advance.
Figs. 57A and 57B show examples of the pre-task and post-task three-dimensional model data. In Figs. 57A and 57B, the example is a task of assembling a cubic block-shaped object B, identical to object A, onto the cubic block-shaped object A at a position and pose shifted along a given axis. In this case, the pre-task three-dimensional model data is, as shown in Fig. 57A, the three-dimensional model data of object A alone, since it represents the state before the assembly of object B. As shown in Fig. 57B, the post-task three-dimensional model data is the data in which objects A and B are assembled under the above conditions. Figs. 57A and 57B are drawn in a planar manner for convenience, but as the word "three-dimensional" indicates, the shape data acquired as the first inspection information is three-dimensional data that is not restricted to any particular observation position or direction.
In S100001, viewpoint candidate information, which is the set of candidates for the view information (the information including the viewpoint position and line-of-sight direction), is also acquired. The viewpoint candidate information is assumed to be information input by the user or generated by the processing section 11120 or the like; for example, it is information preset before shipment by the manufacturer of the processing device 10000 (or the robot 30000).
Although the viewpoint candidate information is, as described above, the set of candidates for the view information, the points that could become viewpoint candidates are extremely numerous (in a narrow sense, infinite). For example, when the view information is set in an object coordinate system based on the inspection target object, every point other than those contained in the interior of the inspection target object in the object coordinate system could become viewpoint candidate information. Of course, by using such a large set of viewpoint candidates (without restricting them in the processing), the view information can be set flexibly and finely according to the situation. Therefore, if the processing load when setting the view information is not a problem, the viewpoint candidate information need not be acquired in S100001. In the following description, however, the viewpoint candidate information is set in advance, so that it can be used generically even when various objects become the inspection target and so that the processing load in setting the view information is suppressed.
Here, the position and orientation in which the inspection target object OB will be placed at the time of inspection cannot be known in advance. Because it is therefore uncertain whether the imaging unit 5000 can actually be moved to the position and orientation corresponding to a given piece of viewpoint information, limiting the viewpoint information to a very small number (for example, one or two) is impractical: if only a few pieces of viewpoint information are generated and the imaging unit 5000 cannot be moved to any of them, the inspection processing cannot be executed at all. To guard against this risk, a certain number of pieces of viewpoint information must be generated, and consequently the number of viewpoint candidates must also be of a certain size.
Figure 58 shows an example of viewpoint candidate information. In Figure 58, 18 viewpoint candidates are set around the origin of the object coordinate system; their specific coordinate values are shown in Figure 59. Taking viewpoint candidate A as an example, its viewpoint position lies on the x-axis at a given distance from the origin (200 in the example of Figure 59). Its line-of-sight direction corresponds to the vector (ax, ay, az); for viewpoint candidate A this is the negative x-direction, i.e., the direction toward the origin. Note that determining the line-of-sight vector alone does not fix the orientation of the imaging unit 5000, because the imaging unit 5000 can still rotate around the line-of-sight vector. Therefore, a second vector (bx, by, bz), which specifies the rotation angle around the line-of-sight vector, is also set in advance. As shown in Figure 58, two points on each of the x, y, and z axes together with points between each pair of axes — 18 points in total — are used as viewpoint candidates. By setting the viewpoint candidates around the origin of the object coordinate system in this way, appropriate viewpoint information can be set in the world coordinate system (robot coordinate system) regardless of how the inspection target object is placed. Specifically, this suppresses the possibility that the imaging unit 5000 cannot be moved to all (or most) of the viewpoints set from the candidates, or that inspection is impossible even after moving because of occluders and the like, so that inspection under at least a sufficient number of pieces of viewpoint information can be achieved.
In a visual inspection, checking from only a single viewpoint position and line-of-sight direction is not in itself harmful, but for accuracy it is preferable to inspect from multiple viewpoint positions and line-of-sight directions. This is because, when inspecting from only one direction, the region that should be checked may not be sufficiently observable (for example, it may not appear in the image at a sufficiently large size). The second inspection information is therefore preferably not a single piece of viewpoint information but a viewpoint information group containing multiple pieces of viewpoint information. This can be realized, for example, by generating viewpoint information from multiple (essentially all) of the viewpoint candidates described above. Even when the viewpoint candidate information is not used, multiple pieces of viewpoint information are acquired. That is, the second inspection information includes a viewpoint information group containing multiple pieces of viewpoint information, and each piece of viewpoint information in the group includes a viewpoint position and line-of-sight direction of the imaging unit 5000 for the inspection processing. Specifically, the processing unit 11120 generates, from the first inspection information, a viewpoint information group containing multiple pieces of viewpoint information for the imaging unit 5000 as the second inspection information.
The viewpoint candidates described above are positions in the object coordinate system, but at the stage when the viewpoint candidates are set, the shape and size of the specific inspection target object are undetermined. Specifically, although Figure 58 uses the object coordinate system based on the inspection target object, the position and orientation of the object within that coordinate system remain undefined. Since viewpoint information must at least specify a relative positional relationship to the inspection target object, generating specific viewpoint information from the viewpoint candidates requires associating them with the inspection target object.
Here, the origin of the coordinate system in which the viewpoint candidates are set is the position at the center of all the viewpoint candidates, and when the imaging unit 5000 is placed at any of the candidates, the origin lies along the imaging direction (optical-axis direction) of the imaging unit 5000. In other words, the origin of the coordinate system is the position best observed by the imaging unit 5000. Since the position that should be observed most closely in the inspection processing is the inspection processing target position described above (in a narrow sense an assembly position, which may be the assembly position shown in Figure 58), viewpoint information corresponding to the inspection target object is generated using the inspection processing target position acquired as the first inspection information.
That is, the first inspection information includes an inspection processing target position relative to the inspection target object; the robot 30000 sets an object coordinate system corresponding to the inspection target object with the inspection processing target position as the reference, and generates the viewpoint information using the object coordinate system. Specifically, the information acquisition unit 11110 acquires, as the first inspection information, the inspection processing target position relative to the inspection target object, and the processing unit 11120 sets the object coordinate system corresponding to the inspection target object with the inspection processing target position as the reference and generates the viewpoint information using that object coordinate system (S100002).
For example, suppose the shape data of the inspection target object has the shape shown in Figure 60 and the point O in it is acquired as the inspection processing target position of the first inspection information. In this case, an object coordinate system such as the one shown in Figure 60 is set with the point O as its origin, matching the position and orientation of the inspection target object. Once the position and orientation of the inspection target object in the object coordinate system are determined, the relative relationship between each viewpoint candidate and the inspection target object becomes definite, so each viewpoint candidate can be used as viewpoint information.
Once the viewpoint information group containing multiple pieces of viewpoint information has been generated, the various components of the second inspection information are generated. First, a qualified image corresponding to each piece of viewpoint information is generated (S100003). Specifically, the processing unit 11120 acquires, as the qualified image used in the inspection processing, an image of the three-dimensional model data captured by a virtual camera placed at the viewpoint position and line-of-sight direction corresponding to the viewpoint information.
The qualified image must show the inspection target object in its ideal state. Since the three-dimensional model data (shape information) acquired as the first inspection information is shape data of the ideal inspection target object, an image of the three-dimensional model data observed from an imaging unit placed according to the viewpoint information is used as the qualified image. When three-dimensional model data is used, no actual shooting by the imaging unit 5000 takes place; instead, processing with a virtual camera is performed (specifically, a conversion that projects the three-dimensional data onto two-dimensional data). If inspection of the result of a robot operation is assumed, the qualified image shows the state of the ideal inspection target object at the end of the robot operation. Since that ideal end state is represented by the post-operation three-dimensional model data described above, the image of the post-operation three-dimensional model data captured by the virtual camera is used as the qualified image. Because a qualified image is acquired for each piece of viewpoint information, when 18 pieces of viewpoint information are set as described above, there are also 18 qualified images. The images on the right side of each of Figures 61A to 61G correspond to qualified images for the assembly operation of Figure 57B. Figures 61A to 61G show images for 7 viewpoints, but as described above, images are acquired in the same number as the pieces of viewpoint information. Additionally, in the processing of S100003, in view of the later-described inspection-area and qualification-threshold processing, pre-operation images — images of the pre-operation three-dimensional model data captured by the virtual camera — are also acquired in advance (the left side of each of Figures 61A to 61G).
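The "virtual camera" step above is an ordinary 3D-to-2D projection. A minimal pinhole-projection sketch follows; the camera-frame construction, focal length, point-cloud model representation, and `up` vector are illustrative assumptions, not the patent's rendering method:

```python
import numpy as np

def render_points(model_pts, cam_pos, gaze, up=(0.0, 0.0, 1.0), f=500.0):
    """Project 3D model points through a virtual pinhole camera placed at
    cam_pos and looking along gaze; returns (u, v) image-plane offsets."""
    z = np.asarray(gaze, float); z /= np.linalg.norm(z)   # optical axis
    x = np.cross(z, up); x /= np.linalg.norm(x)           # camera right axis
    y = np.cross(z, x)                                    # camera down axis
    R = np.stack([x, y, z])                               # world -> camera rotation
    p_cam = (np.asarray(model_pts, float) - cam_pos) @ R.T
    depth = p_cam[:, 2]
    return f * p_cam[:, :2] / depth[:, None]              # perspective divide
```

A point at the object-coordinate origin (the inspection processing target position) projects to the image center for every viewpoint candidate, which is consistent with the origin being the best-observed position.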
Next, the inspection area — the region within the qualified image and the captured image to be used for the inspection processing — is acquired as the second inspection information (S100004). The inspection area is as described above. Since the way the parts important for the inspection appear changes with the viewpoint information, the inspection area is set for each piece of viewpoint information contained in the viewpoint information group.
Specifically, the processing unit 11120 acquires, as the qualified image, the image of the post-operation three-dimensional model data captured by the virtual camera placed at the viewpoint position and line-of-sight direction corresponding to the viewpoint information; acquires, as the pre-operation image, the image of the pre-operation three-dimensional model data captured by the same virtual camera; and, through comparison processing of the pre-operation image and the qualified image, obtains the inspection area, i.e., the region within the image to be used for the inspection processing.
Figures 62A to 62D show a specific example of the inspection-area setting processing. When the result of a robot operation that assembles object B to object A from the right is the inspection target, the image of the post-operation three-dimensional model data captured by the virtual camera is acquired as the qualified image as shown in Figure 62B, and the image of the pre-operation three-dimensional model data captured by the virtual camera is acquired as the pre-operation image as shown in Figure 62A. In a robot operation whose state changes between before and after the work, it is the changed part of the state that matters most. In the example of Figure 62B, what should be judged in the inspection is whether object B has been assembled to object A, whether its assembly position is correct, and so on. Parts of object A unrelated to the operation (for example, parts other than the joining surface used in assembly) could also be checked, but their importance is comparatively low.
That is, the region of higher importance in the qualified image and the captured image can be regarded as the region that changes between before and after the operation. Thus, in the present embodiment, the processing unit 11120 performs, as the comparison processing, processing that obtains a difference image — the difference between the pre-operation image and the qualified image — and obtains, as the inspection area, the region of the difference image containing the inspection target. In the example of Figures 62A and 62B, the difference image is Figure 62C, so an inspection area containing the region of object B included in Figure 62C is set. In this way, the region of the difference image containing the inspection target object — that is, the region inferred to be of higher importance for the inspection — can be used as the inspection area.
Here, the inspection processing target position (the assembly position in Figure 62A, etc.) is known from the first inspection information, and where that position falls in the image is also known. Since the inspection processing target position serves as the reference position of the inspection, the inspection area can be obtained from the difference image together with the inspection processing target position. For example, as shown in Figure 62C, the processing unit 11120 obtains the inspection processing target position in the difference image, BlobHeight (the maximum vertical extent of the region remaining in the difference image), and BlobWidth (its maximum horizontal extent). If the region within BlobHeight above and below, and within BlobWidth to the left and right, of the inspection processing target position is then taken as the inspection area, the region of the difference image containing the inspection target object is obtained as the inspection area. In the present embodiment, margins may further be provided vertically and horizontally; in the example of Figure 62D, a region with a 30-pixel margin on each of the four sides is set as the inspection area.
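The difference-image and bounding-box step of S100004 can be sketched as below. The thresholding of the difference, the interpretation of BlobWidth/BlobHeight as the maximum reach of changed pixels from the target position, and grayscale-array inputs are assumptions for illustration:

```python
import numpy as np

def inspection_area(pre_img, ok_img, target_xy, margin=30, thresh=0):
    """Sketch of S100004: difference image between the pre-operation image
    and the qualified image, then a box around the inspection processing
    target position sized by the changed region's extent plus a margin.
    Returns (x0, y0, x1, y1), or None if nothing differs."""
    diff = np.abs(ok_img.astype(int) - pre_img.astype(int))
    ys, xs = np.nonzero(diff > thresh)            # pixels that changed
    if ys.size == 0:
        return None                               # no before/after change here
    cx, cy = target_xy
    blob_w = max(xs.max() - cx, cx - xs.min())    # max horizontal reach from target
    blob_h = max(ys.max() - cy, cy - ys.min())    # max vertical reach from target
    h, w = diff.shape
    x0 = max(0, cx - blob_w - margin); x1 = min(w, cx + blob_w + margin + 1)
    y0 = max(0, cy - blob_h - margin); y1 = min(h, cy + blob_h + margin + 1)
    return (x0, y0, x1, y1)
```

With synthetic model renderings the difference is exact, so `thresh=0` suffices; real renderings would need anti-aliasing tolerance.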
Figures 63A to 63D and Figures 64A to 64D are similar. Figures 63A to 63D show an operation in which, seen from the viewpoint position, a thin object B is assembled to the far side of object A (or an operation that inserts a rod-shaped object B into an object A with a horizontal hole in the image). In this case, the region of the inspection target object in the difference image is divided into multiple discontinuous regions, but the processing can be carried out in the same way as in Figures 62A to 62D. Here, since object B is thinner than object A, the vicinity of the upper and lower ends of object A in the image is of relatively low importance for the inspection. With the technique of the present embodiment, as shown in Figure 63D, the regions of object A considered to be of low importance can be excluded from the inspection area.
Figures 64A to 64D show an operation that assembles an object B smaller than object A to the larger object A. This corresponds, for example, to an operation that fastens an object A such as a PC or a printer at a prescribed position with a screw serving as object B. In such an operation, there is little need to inspect the entire PC or printer, while the position where the screw is fastened is highly important. With the technique of the present embodiment, as shown in Figure 64D, most of object A can be excluded from the inspection area, and the surroundings of the object B that should be checked can be set as the inspection area.
The technique described above is a highly versatile way of setting the inspection area, but the inspection-area setting of the present embodiment is not limited to it; other techniques may be used. For example, in Figure 62D an even narrower inspection area would suffice, so a technique that sets a narrower region may also be used.
Next, the threshold (qualification threshold) used in the comparison processing between the qualified image and the actually captured image is set (S100005). Specifically, the processing unit 11120 acquires the qualified image and the pre-operation image described above, and sets the threshold used in the inspection processing based on the captured image and the qualified image according to the similarity between the pre-operation image and the qualified image.
Figures 65A to 65D show a specific example of the threshold setting processing. Figure 65A is the qualified image. If the robot operation is carried out ideally (in a broad sense, if the inspection target object is in the ideal state), the actually captured image should match the qualified image, and the similarity takes its maximum value (1000 here). Conversely, if no element matches the qualified image at all, the similarity takes its minimum value (0 here). The threshold here is a value such that the inspection is judged as passed if the similarity between the qualified image and the captured image is at or above the threshold, and as failed if the similarity is below it; the threshold is therefore a prescribed value between 0 and 1000. Now, Figure 65B is the pre-operation image corresponding to Figure 65A, but since Figure 65B also contains components in common with Figure 65A, the similarity between the pre-operation image and the qualified image is not 0. For example, when the similarity is judged using edge information of the images, Figure 65C — the edge information of Figure 65A — is used for the comparison processing, but Figure 65D, the edge information of the pre-operation image, also contains parts that match Figure 65C. In the example of Figures 65C and 65D, the similarity takes a value of about 700. Consequently, even if the inspection target object is captured in a state in which no work has been performed at all, the similarity between that captured image and the qualified image remains around 700. Capturing the inspection target object with no work performed refers, for example, to the case where the operation itself could not be executed, or where it was executed but the object to be assembled fell and does not appear in the image — cases in which the robot operation has very likely failed. That is, since a similarity of about 700 can occur in situations that the inspection should judge as "failed", setting the threshold below that value would be inappropriate.
Thus, in the present embodiment, a value between the maximum similarity (e.g., 1000) and the similarity between the pre-operation image and the qualified image (e.g., 700) is set as the threshold. As an example, using the average, the threshold can be obtained by the following equation (13):

Threshold = {1000 + (similarity between qualified image and pre-operation image)} / 2   (13)
The threshold setting can be modified in various ways; for example, the formula used to obtain the threshold may itself be changed according to the value of the similarity between the qualified image and the pre-operation image. For example, the following modification can be implemented: when the similarity between the qualified image and the pre-operation image is 600 or less, the threshold is fixed at 800; when that similarity is 900 or more, the threshold is fixed at 1000; otherwise, equation (13) above is used.
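Equation (13) together with the fixed-value variant just described can be written directly; the clamp points (600 → 800, 900 → 1000) come from the example in the text, while the function name is an assumption:

```python
def qualification_threshold(sim_pre_vs_ok, s_max=1000):
    """Qualification threshold of S100005: equation (13) by default, with
    the fixed-threshold variant described in the text for very low or
    very high pre-operation/qualified-image similarity."""
    if sim_pre_vs_ok <= 600:
        return 800                        # difference is large; keep a fixed bar
    if sim_pre_vs_ok >= 900:
        return 1000                       # almost no visible change at this view
    return (s_max + sim_pre_vs_ok) / 2    # equation (13): midpoint of [sim, 1000]
```

With the text's example similarity of 700, this yields a threshold of 850, safely above the ~700 score a completely unworked object would receive.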
The similarity between the qualified image and the pre-operation image changes with how the appearance of the inspection target object changes between before and after the operation. For example, in the case of Figures 66A to 66D, which correspond to viewpoint information different from that of Figures 65A to 65D, the difference in the appearance of the inspection target object before and after assembly is smaller, and as a result the similarity between the qualified image and the pre-operation image is higher than in the previous example. In other words, just as with the qualified image and the inspection area, the similarity calculation and the threshold are computed for each piece of viewpoint information contained in the viewpoint information group.
Finally, for each piece of viewpoint information in the viewpoint information group, the processing unit 11120 sets priority information representing the priority with which the imaging unit 5000 is moved to the viewpoint position and line-of-sight direction corresponding to that viewpoint information (S100006). As described above, the appearance of the inspection target object changes with the viewpoint position and line-of-sight direction represented by the viewpoint information. It can therefore happen that the region to be checked on the inspection target object is well observable from one piece of viewpoint information but not from another. Moreover, since the viewpoint information group contains a sufficient number of pieces of viewpoint information, the inspection processing need not use all of them: if the inspection passes at a prescribed number of viewpoints (for example, at two of them), the final result is a pass, and the remaining pieces of viewpoint information need not be processed at all. Considering the efficiency of the inspection processing, it is therefore preferable to process first the viewpoint information that is useful for the inspection, such as viewpoints from which the region to be checked is well observable. Hence, in the present embodiment, a priority is set for each piece of viewpoint information.
When the inspection processing targets the result of a robot operation, making the difference between before and after the operation explicit is useful for the inspection. As an extreme example, consider, as shown in Figure 67A, an operation that assembles a smaller object B to a larger object A from the upper left of the drawing. When viewpoint information 1, corresponding to viewpoint position 1 and line-of-sight direction 1 of Figure 67A, is used, the pre-operation image is as shown in Figure 67B and the qualified image as shown in Figure 67C, and no difference arises between them. That is, viewpoint information 1 is not useful for inspecting this operation. On the other hand, when viewpoint information 2, corresponding to viewpoint position 2 and line-of-sight direction 2, is used, the pre-operation image is as shown in Figure 67D and the qualified image as shown in Figure 67E, and the difference is clear. In such a case, the priority of viewpoint information 2 can be set higher than that of viewpoint information 1.
That is, the larger the amount of change between before and after the operation, the higher the priority is set; as explained with Figures 65A to 66D, a large before/after change corresponds to a low similarity between the pre-operation image and the qualified image. Thus, in the processing of S100006, the similarity between the pre-operation image and the qualified image is calculated for each of the multiple pieces of viewpoint information (it has already been obtained during the threshold setting of S100005), and a higher priority is set for a lower similarity. When the inspection processing is executed, the imaging unit 5000 is moved to the viewpoints in descending order of priority, and the inspection is carried out.
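The priority rule of S100006 — lowest pre-operation/qualified-image similarity first — amounts to a sort; the index-list representation is an assumption:

```python
def prioritize_viewpoints(similarities):
    """S100006 sketch: viewpoints whose pre-operation image is least similar
    to the qualified image (i.e., largest before/after change) get the
    highest priority. Returns viewpoint indices in inspection order."""
    return sorted(range(len(similarities)), key=lambda i: similarities[i])
```

For example, with per-viewpoint similarities [900, 700, 850], viewpoint 1 (similarity 700, largest change) is visited first.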
3.2 Online processing
Next, the flow of the inspection processing that uses the second inspection information — the online processing — is explained with the flowchart of Figure 68. When the online processing starts, the second inspection information generated by the offline processing described above is first read (S2001).
Then, the robot 30000 moves the imaging unit 5000 to the viewpoint position and line-of-sight direction corresponding to each piece of viewpoint information in the viewpoint information group, following the movement order determined by the priorities represented by the priority information. This can be realized, for example, through control by the processing unit 11120 of Figure 50 or the control unit 3500 of Figure 51A. Specifically, the piece of viewpoint information with the highest priority among the multiple pieces contained in the viewpoint information group is selected (S2002), and the imaging unit 5000 is moved to the viewpoint position and line-of-sight direction corresponding to it (S2003). Controlling the imaging unit 5000 according to the priorities in this way realizes efficient inspection processing.
However, the viewpoint information from the offline processing is essentially information defined in the object coordinate system; it does not take the position in real space (the world coordinate system or robot coordinate system) into account. For example, as shown in Figure 69A, suppose that in the object coordinate system a viewpoint position and viewpoint direction are set in the direction of a given face F1 of the inspection target object. If, in the inspection of that object, the inspection target object is placed on the workbench with the face F1 facing downward as shown in Figure 69B, then the viewpoint position and line-of-sight direction in question would lie under the workbench, and the imaging unit 5000 (the hand-eye camera of the robot 30000) cannot be moved to that location.
That is, S2003 is a control under which the imaging unit 5000 cannot always be moved to the position and orientation corresponding to the viewpoint information; whether the movement is possible is judged (S2004), and the movement is performed only when it is. Specifically, when the processing unit 11120 determines from the movable-range information of the robot that the imaging unit 5000 cannot be moved to the viewpoint position and line-of-sight direction corresponding to the i-th piece of viewpoint information (i being a natural number) among the multiple pieces, it skips the movement of the imaging unit 5000 corresponding to the i-th piece and performs control for the j-th piece of viewpoint information (j being a natural number with i ≠ j) that comes next in the movement order. Concretely, when the judgment in S2004 is negative, the inspection processing from S2005 onward is skipped, the flow returns to S2002, and the next piece of viewpoint information is selected.
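The S2002–S2005 loop of Figure 68 can be sketched as below. The callback interfaces and the stop-after-N-passes rule (the "pass at a prescribed number of viewpoints, e.g., two" criterion mentioned earlier) are modeled as assumptions; `viewpoints` is assumed already sorted by descending priority:

```python
def online_inspection(viewpoints, can_move, inspect, passes_needed=2):
    """Sketch of the online loop: visit viewpoints in priority order,
    skip unreachable ones (S2004), and stop once enough viewpoints pass."""
    passed = 0
    for vp in viewpoints:
        if not can_move(vp):          # S2004: outside movable range -> skip S2005+
            continue
        if inspect(vp):               # S2005 onward: shoot and compare
            passed += 1
            if passed >= passes_needed:
                return True           # final result: qualified
    return False                      # not enough passing viewpoints
```

Because unreachable viewpoints are merely skipped, a sufficiently large viewpoint information group still yields enough reachable viewpoints for a verdict.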
Here, if the number of pieces of viewpoint information contained in the viewpoint information group is N (N being a natural number; N = 18 in the example of Figure 58 above), then i is an integer satisfying 1 ≤ i ≤ N, and j is an integer satisfying 1 ≤ j ≤ N and j ≠ i. The movable-range information of the robot is information representing the range over which the part of the robot on which the imaging unit 5000 is mounted can move. For each joint of the robot, the range of joint angles the joint can take is determined at design time, and once the joint-angle values of all joints are determined, the position of any given part of the robot can be calculated by forward kinematics. That is, the movable-range information is obtained from the design specifications of the robot; it may be the set of values the joint angles can take, the space of positions and orientations the imaging unit 5000 can reach, or other information.
The movable-range information of the robot is expressed in the robot coordinate system or the world coordinate system. Therefore, to compare the viewpoint information with the movable-range information, the viewpoint information in the object coordinate system shown in Figure 69A must be converted into viewpoint information in the real-space positional relationship shown in Figure 69B, i.e., in the robot coordinate system.
Thus, the information acquisition unit 11110 acquires in advance, as the first inspection information, object position-and-orientation information representing the position and orientation of the inspection target object in the world coordinate system. In the processing of S2004, the processing unit 11120 obtains the viewpoint information expressed in the world coordinate system from the relative relationship between the world coordinate system and the object coordinate system determined from the object position-and-orientation information, and judges, using the robot's movable-range information expressed in the world coordinate system together with that world-coordinate viewpoint information, whether the imaging unit 5000 can be moved to the viewpoint position and line-of-sight direction represented by the viewpoint information.
Since this processing is a coordinate transformation, the relative relationship between the two coordinate systems is needed; it can be obtained from the position and orientation of the reference of the object coordinate system within the world coordinate system, i.e., from the object position-and-orientation information.
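The coordinate transformation just described can be sketched as follows; representing the object's pose as a 3×3 rotation matrix plus a translation vector is an assumption about the form of the object position-and-orientation information:

```python
import numpy as np

def viewpoint_to_world(vp_pos_obj, vp_gaze_obj, R_obj2world, t_obj2world):
    """Map a viewpoint defined in the object coordinate system into the
    world coordinate system using the object's pose (R, t), as needed for
    the movable-range comparison of S2004."""
    R = np.asarray(R_obj2world, float)
    t = np.asarray(t_obj2world, float)
    pos_world = R @ np.asarray(vp_pos_obj, float) + t   # a point transforms with R and t
    gaze_world = R @ np.asarray(vp_gaze_obj, float)     # a direction transforms with R only
    return pos_world, gaze_world
```

Note that the viewpoint position transforms with both rotation and translation, while the line-of-sight direction, being a vector, rotates only.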
Moreover, the line-of-sight direction represented by the viewpoint information need not uniquely determine the position and orientation of the imaging unit 5000. Specifically, as explained for the viewpoint candidate information of Figures 58 and 59, the viewpoint position is determined by (x, y, z) and the orientation of the imaging unit 5000 by (ax, ay, az) and (bx, by, bz), but (bx, by, bz) may be left out of consideration. If (x, y, z), (ax, ay, az), and (bx, by, bz) were all imposed as conditions in the judgment of whether the imaging unit 5000 can be moved to the viewpoint position and line-of-sight direction represented by the viewpoint information, movements of the imaging unit 5000 satisfying all of them would be hard to realize. Concretely, even when shooting toward the origin along (ax, ay, az) from the position represented by (x, y, z) is possible, the vector representing the rotation angle around (ax, ay, az) at that position may only take a limited range, and (bx, by, bz) may not be satisfiable. Therefore, in the present embodiment, the line-of-sight direction may exclude (bx, by, bz): if (x, y, z) and (ax, ay, az) are satisfied, the imaging unit 5000 is judged to be movable to that viewpoint information.
When the imaging unit 5000 has completed its movement to the position and orientation corresponding to the viewpoint information, shooting is performed by the imaging unit 5000 at that position and orientation and a captured image is acquired (S2005). The inspection processing is performed by comparing the captured image with the qualified image; however, while the qualified image uses a prescribed value for (bx, by, bz) as described above, the imaging unit 5000 that shoots the captured image may have a rotation angle about the line-of-sight direction that differs from the angle represented by (bx, by, bz). For example, when the qualified image is Figure 70A and the captured image is Figure 70B, a rotation of a given angle may arise between the two images. In that case, cutting out the inspection area would be inappropriate, and the similarity calculation would likewise be inappropriate. For convenience of explanation, Figure 70B is drawn with a plain single-color background like the qualified image created from the model data, but since Figure 70B is a captured image, other objects may also appear in it. Furthermore, considering lighting conditions and the like, the hue of the inspection target object may also differ from that in the qualified image.
Therefore, calculation processing of the image rotation angle between the captured image and the qualified image is performed here (S2006). Specifically, since the qualified image is generated using the above (bx, by, bz), the rotation angle about the line-of-sight direction of the imaging unit (virtual camera) corresponding to the qualified image is known information. The position and posture of the imaging unit 5000 at the time the captured image was taken should likewise be known in the robot control that moves the imaging unit 5000 to the position and posture corresponding to the viewpoint information; if it were not, the movement could not be performed at all. As a result, the rotation angle about the line-of-sight direction of the imaging unit 5000 at the time of imaging is also obtained. In the processing of S2006, the rotation angle between the images is obtained from the difference of the two rotation angles about the line-of-sight direction. Then, using the obtained image rotation angle, rotational deformation processing of at least one of the qualified image and the captured image is performed, thereby correcting the angular difference between the qualified image and the captured image.
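The angle difference of S2006 can be illustrated as follows (a minimal sketch; both roll angles about the line-of-sight direction are assumed known in degrees, as the text states, and the result is wrapped to the smallest equivalent rotation):

```python
def image_rotation_angle(roll_qualified_deg, roll_captured_deg):
    """Difference of the two known roll angles about the line-of-sight
    direction, wrapped into [-180, 180); this is the angle by which one
    image must be rotated to align with the other before the inspection
    region is cut out."""
    diff = roll_captured_deg - roll_qualified_deg
    return (diff + 180.0) % 360.0 - 180.0

print(image_rotation_angle(10.0, 40.0))   # → 30.0
print(image_rotation_angle(350.0, 5.0))   # → 15.0  (wraps across 360°)
```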
Since a qualified image matching the angle of the captured image is obtained by the above processing, the inspection region obtained in S100004 is extracted from each image (S2007), and confirmation processing is performed using that region (S2008). In S2008, the similarity between the qualified image and the captured image is calculated; the object is judged qualified if the similarity is at least the threshold obtained in S100005, and unqualified otherwise.
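The similarity judgment of S2008 might look like the following sketch (the patent does not specify a similarity measure; normalized cross-correlation is used here purely as a stand-in, operating on flattened pixel lists of the extracted inspection regions):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equal-length pixel
    lists (a stand-in similarity measure; the embodiment does not fix
    a particular metric). Returns a value in [-1, 1]."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def inspect(region_qualified, region_captured, threshold):
    """S2008: qualified if the similarity is at least the threshold."""
    return ncc(region_qualified, region_captured) >= threshold

q = [10, 20, 30, 40]
c = [12, 22, 32, 42]          # same pattern, uniformly brighter
print(inspect(q, c, 0.99))    # → True
```

Being mean-subtracted and normalized, this particular metric also tolerates the uniform brightness and hue shifts mentioned above.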
However, the inspection processing need not be performed from only one piece of viewpoint information as described above; multiple pieces of viewpoint information may be used. In that case, it is judged whether the confirmation processing has been performed a predetermined number of times (S2009), and the processing ends once it has. For example, suppose the inspection is to be qualified when no problem is found in the confirmation processing at three positions. Then, once three qualified judgments have been made in S2008, the judgment in S2009 is Yes, the inspection object is made qualified, and the inspection processing ends. On the other hand, even if the result in S2008 is qualified, if that is only the first or second such judgment, the judgment in S2009 is No, and the processing continues with the next viewpoint information.
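The multi-viewpoint loop of S2008/S2009 can be sketched as follows (hypothetical code; `confirm` stands in for the per-viewpoint movement, imaging and comparison of S2005 to S2008):

```python
def multi_view_inspection(viewpoints, confirm, required=3):
    """Run the confirmation processing from successive viewpoints; the
    object passes only when `required` confirmations all succeed.
    `confirm` is a hypothetical callback wrapping S2005-S2008."""
    passed = 0
    for vp in viewpoints:
        if not confirm(vp):
            return False              # any failed confirmation → unqualified
        passed += 1
        if passed == required:
            return True               # S2009: predetermined count reached
    return False                      # ran out of viewpoints before the count

views = ["v1", "v2", "v3", "v4"]
print(multi_view_inspection(views, lambda v: True))       # → True
print(multi_view_inspection(views, lambda v: v != "v2"))  # → False
```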
In the above description, the online processing is likewise performed by the information acquisition unit 11110 and the processing unit 11120, but this is not limiting: the above processing may also be performed by the control unit 3500 of the robot 30000, as described above. That is, the online processing may be performed by the information acquisition unit 11110 and processing unit 11120 of the robot 30000 of Fig. 50, or by the control unit 3500 of the robot of Fig. 51A. Alternatively, it may be performed by the information acquisition unit 11110 and processing unit 11120 of Fig. 51A, in which case the processing device 10000 can be regarded as the control device of the robot, as described above.
When the online processing is performed by the control unit 3500 of the robot 30000, and the control unit 3500 determines from the movable-range information of the robot 30000 that the imaging unit 5000 cannot be moved to the viewpoint position and line-of-sight direction corresponding to the i-th piece of viewpoint information (i being a natural number) among the multiple pieces of viewpoint information, it skips the movement of the imaging unit 5000 corresponding to the i-th viewpoint information and performs the control corresponding to the next, j-th piece of viewpoint information in the movement order (j being a natural number satisfying i ≠ j).
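This skip behavior can be sketched as follows (hypothetical code; `movable` stands in for the judgment based on the movable-range information, and `act` for the imaging and inspection performed at a reachable viewpoint):

```python
def visit_viewpoints(viewpoints, movable, act):
    """Skip any i-th viewpoint that the movable-range check rejects and
    continue with the next j-th viewpoint in the movement order.
    `movable` and `act` are hypothetical callbacks."""
    visited = []
    for vp in viewpoints:
        if not movable(vp):
            continue                  # skip: the imaging unit cannot reach it
        act(vp)
        visited.append(vp)
    return visited

vps = [1, 2, 3, 4]
print(visit_viewpoints(vps, lambda v: v != 2, lambda v: None))  # → [1, 3, 4]
```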
In addition, the processing device 10000 and the like of the present embodiment may realize part or most of their processing by a program. In that case, a processor such as a CPU executes the program, thereby realizing the processing device 10000 and the like of the present embodiment. Specifically, a program stored in a non-transitory information storage medium is read, and a processor such as a CPU executes the read program. Here, the information storage medium (a computer-readable medium) stores programs, data and the like, and its function can be realized by an optical disc (DVD, CD, etc.), an HDD (hard disk drive), a memory (card-type memory, ROM, etc.) or the like. A processor such as a CPU performs the various kinds of processing of the present embodiment according to the program (data) stored in the information storage medium. That is, the information storage medium stores a program for causing a computer (a device provided with an operation unit, a processing unit, a storage unit and an output unit) to function as each unit of the present embodiment (a program for causing the computer to execute the processing of each unit).
Claims (16)
1. A robot control device, characterized by comprising:
a first control unit that generates a command value so that an endpoint of an arm of a robot moves to a target position along a path formed based on one or more set teaching positions;
an image acquisition unit that acquires a target image, that is, an image including the endpoint when the endpoint is at the target position, and a current image, that is, an image including the endpoint when the endpoint is at a current position;
a second control unit that generates a command value so that the endpoint moves from the current position to the target position based on the current image and the target image; and
a drive control unit that moves the arm using the command value generated by the first control unit and the command value generated by the second control unit.
2. A robot control device, characterized by comprising:
a control unit that generates a trajectory of an endpoint of an arm of a robot so as to bring the endpoint close to a target position; and
an image acquisition unit that acquires a current image, that is, an image including the endpoint when the endpoint is at a current position, and a target image, that is, an image including the endpoint when the endpoint is at the target position,
wherein the control unit moves the arm based on a path formed based on one or more set teaching positions, the current image and the target image.
3. The robot control device according to claim 1, characterized in that
the drive control unit moves the arm using a signal formed by superimposing the command value generated by the first control unit and the command value generated by the second control unit, each weighted by a prescribed component.
4. The robot control device according to claim 3, characterized in that
the drive control unit determines the prescribed component according to the difference between the current position and the target position.
5. The robot control device according to claim 3, characterized by
comprising an input unit for inputting the prescribed component.
6. The robot control device according to claim 3, characterized by
comprising a storage unit that stores the prescribed component.
7. The robot control device according to claim 1, characterized in that
the drive control unit drives the arm using the command value based on the trajectory generated by the first control unit when the current position satisfies a prescribed condition, and drives the arm using the command value based on the trajectory generated by the first control unit and the command value based on the trajectory generated by the second control unit when the current position does not satisfy the prescribed condition.
8. The robot control device according to claim 1, characterized by comprising:
a force detection unit that detects a force applied to the endpoint; and
a third control unit that generates a trajectory of the endpoint so that the endpoint moves from the current position to the target position according to the value detected by the force detection unit,
wherein the drive control unit moves the arm using the command value based on the trajectory generated by the first control unit, the command value based on the trajectory generated by the second control unit and the command value based on the trajectory generated by the third control unit, or using the command value based on the trajectory generated by the first control unit and the command value based on the trajectory generated by the third control unit.
9. A robot system, characterized by comprising:
a robot having an arm;
a first control unit that generates a command value so that an endpoint of the arm moves to a target position along a path formed based on one or more set teaching positions;
an imaging unit that captures a target image, that is, an image including the endpoint when the endpoint is at the target position, and a current image, that is, an image including the endpoint when the endpoint is at a current position that is the position at the current time;
a second control unit that generates a command value so that the endpoint moves from the current position to the target position based on the current image and the target image; and
a drive control unit that moves the arm using the command value generated by the first control unit and the command value generated by the second control unit.
10. A robot system, characterized by comprising:
a robot having an arm;
a control unit that generates a trajectory of an endpoint of the arm so as to bring the endpoint close to a target position; and
an imaging unit that captures a current image, that is, an image including the endpoint when the endpoint is at a current position that is the position at the current time, and a target image, that is, an image including the endpoint when the endpoint is at the target position,
wherein the control unit moves the arm based on a path formed based on one or more set teaching positions, the current image and the target image.
11. A robot, characterized by comprising:
an arm;
a first control unit that generates a command value so that an endpoint of the arm moves to a target position along a path formed based on one or more set teaching positions;
an image acquisition unit that acquires a target image, that is, an image including the endpoint when the endpoint is at the target position, and a current image, that is, an image including the endpoint when the endpoint is at a current position that is the position at the current time;
a second control unit that generates a command value so that the endpoint moves from the current position to the target position based on the current image and the target image; and
a drive control unit that moves the arm using the command value generated by the first control unit and the command value generated by the second control unit.
12. A robot, characterized by comprising:
an arm;
a control unit that generates a trajectory of an endpoint of the arm so as to bring the endpoint close to a target position; and
an image acquisition unit that acquires a current image, that is, an image including the endpoint when the endpoint is at a current position, and a target image, that is, an image including the endpoint when the endpoint is at the target position,
wherein the control unit moves the arm based on a path formed based on one or more set teaching positions, the current image and the target image.
13. A robot control method, characterized by comprising the steps of:
acquiring a target image, that is, an image including an endpoint of an arm of a robot when the endpoint is at a target position;
acquiring a current image, that is, an image including the endpoint when the endpoint is at a current position that is the position at the current time; and
generating a command value so that the endpoint moves to the target position along a path formed based on one or more set teaching positions, and generating a command value so that the endpoint moves from the current position to the target position based on the current image and the target image, thereby moving the arm using the command values.
14. A robot control method for controlling an arm of a robot having the arm and an image acquisition unit that acquires a current image, that is, an image including an endpoint of the arm when the endpoint is at a current position, and a target image, that is, an image including the endpoint when the endpoint is at a target position, characterized in that
the arm is controlled using a command value of position control performed based on a path formed based on one or more set teaching positions, and a command value of visual servoing performed based on the current image and the target image.
15. A robot control method for controlling an arm of a robot having the arm and an image acquisition unit that acquires a current image, that is, an image including an endpoint of the arm when the endpoint is at a current position, and a target image, that is, an image including the endpoint when the endpoint is at a target position, characterized in that
position control performed based on a path formed based on one or more set teaching positions and visual servoing performed based on the current image and the target image are carried out simultaneously.
16. A robot control program, characterized by causing an arithmetic device to execute the steps of:
acquiring a target image, that is, an image including an endpoint of an arm of a robot when the endpoint is at a target position;
acquiring a current image, that is, an image including the endpoint when the endpoint is at a current position that is the position at the current time; and
generating a command value so that the endpoint moves to the target position along a path formed based on one or more set teaching positions, and generating a command value so that the endpoint moves from the current position to the target position based on the current image and the target image, thereby moving the arm using the command values.
Applications Claiming Priority (9)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013212930A JP6322949B2 (en) | 2013-10-10 | 2013-10-10 | Robot control apparatus, robot system, robot, robot control method, and robot control program |
JP2013-212930 | 2013-10-10 | ||
JP2013-226536 | 2013-10-31 | ||
JP2013226536A JP6390088B2 (en) | 2013-10-31 | 2013-10-31 | Robot control system, robot, program, and robot control method |
JP2013228653A JP6217322B2 (en) | 2013-11-01 | 2013-11-01 | Robot control apparatus, robot, and robot control method |
JP2013-228653 | 2013-11-01 | ||
JP2013-228655 | 2013-11-01 | ||
JP2013228655A JP6337445B2 (en) | 2013-11-01 | 2013-11-01 | Robot, processing apparatus, and inspection method |
CN201410531769.6A CN104552292A (en) | 2013-10-10 | 2014-10-10 | Control system of robot, robot, program and control method of robot |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410531769.6A Division CN104552292A (en) | 2013-10-10 | 2014-10-10 | Control system of robot, robot, program and control method of robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108081268A true CN108081268A (en) | 2018-05-29 |
Family
ID=53069890
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410531769.6A Pending CN104552292A (en) | 2013-10-10 | 2014-10-10 | Control system of robot, robot, program and control method of robot |
CN201510136619.XA Pending CN104959982A (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
CN201711203574.9A Pending CN108081268A (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
CN201510137541.3A Expired - Fee Related CN104802166B (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
CN201510137542.8A Expired - Fee Related CN104802174B (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410531769.6A Pending CN104552292A (en) | 2013-10-10 | 2014-10-10 | Control system of robot, robot, program and control method of robot |
CN201510136619.XA Pending CN104959982A (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510137541.3A Expired - Fee Related CN104802166B (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
CN201510137542.8A Expired - Fee Related CN104802174B (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
Country Status (1)
Country | Link |
---|---|
CN (5) | CN104552292A (en) |
Families Citing this family (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104552292A (en) * | 2013-10-10 | 2015-04-29 | 精工爱普生株式会社 | Control system of robot, robot, program and control method of robot |
CN104965489A (en) * | 2015-07-03 | 2015-10-07 | 昆山市佰奥自动化设备科技有限公司 | CCD automatic positioning assembly system and method based on robot |
US10959795B2 (en) * | 2015-08-25 | 2021-03-30 | Kawasaki Jukogyo Kabushiki Kaisha | Remote-control manipulator system and method of operating the same |
JP6689974B2 (en) * | 2016-07-06 | 2020-04-28 | 株式会社Fuji | Imaging device and imaging system |
JP6490032B2 (en) * | 2016-08-10 | 2019-03-27 | ファナック株式会社 | Robot controller for assembly robot |
US11373286B2 (en) * | 2016-11-07 | 2022-06-28 | Nabtesco Corporation | Status checking device for built-in object, operation checking device and method for checking built-in object |
JP6833460B2 (en) * | 2016-11-08 | 2021-02-24 | 株式会社東芝 | Work support system, work method, and processing equipment |
JP7314475B2 (en) * | 2016-11-11 | 2023-07-26 | セイコーエプソン株式会社 | ROBOT CONTROL DEVICE AND ROBOT CONTROL METHOD |
JP2018126799A (en) * | 2017-02-06 | 2018-08-16 | セイコーエプソン株式会社 | Control device, robot, and robot system |
KR101963643B1 (en) * | 2017-03-13 | 2019-04-01 | 한국과학기술연구원 | 3D Image Generating Method And System For A Plant Phenotype Analysis |
CN106926241A (en) * | 2017-03-20 | 2017-07-07 | 深圳市智能机器人研究院 | A kind of the tow-armed robot assembly method and system of view-based access control model guiding |
JP6788845B2 (en) * | 2017-06-23 | 2020-11-25 | パナソニックIpマネジメント株式会社 | Remote communication methods, remote communication systems and autonomous mobile devices |
EP3432099B1 (en) * | 2017-07-20 | 2021-09-01 | Siemens Aktiengesellschaft | Method and system for detection of an abnormal state of a machine |
JP6795471B2 (en) * | 2017-08-25 | 2020-12-02 | ファナック株式会社 | Robot system |
JP6963748B2 (en) * | 2017-11-24 | 2021-11-10 | 株式会社安川電機 | Robot system and robot system control method |
CN109992093B (en) * | 2017-12-29 | 2024-05-03 | 博世汽车部件(苏州)有限公司 | Gesture comparison method and gesture comparison system |
JP6873941B2 (en) * | 2018-03-02 | 2021-05-19 | 株式会社日立製作所 | Robot work system and control method of robot work system |
CN111937034A (en) * | 2018-03-29 | 2020-11-13 | 国立大学法人奈良先端科学技术大学院大学 | Learning data set creation method and device |
JP6845180B2 (en) * | 2018-04-16 | 2021-03-17 | ファナック株式会社 | Control device and control system |
TWI681487B (en) * | 2018-10-02 | 2020-01-01 | 南韓商Komico有限公司 | System for obtaining image of 3d shape |
JP6904327B2 (en) * | 2018-11-30 | 2021-07-14 | オムロン株式会社 | Control device, control method, and control program |
CN109571477B (en) * | 2018-12-17 | 2020-09-22 | 西安工程大学 | Improved comprehensive calibration method for robot vision and conveyor belt |
JP6892461B2 (en) * | 2019-02-05 | 2021-06-23 | ファナック株式会社 | Machine control device |
EP3696772A3 (en) * | 2019-02-14 | 2020-09-09 | Denso Wave Incorporated | Device and method for analyzing state of manual work by worker, and work analysis program |
CN110102490B (en) * | 2019-05-23 | 2021-06-01 | 北京阿丘机器人科技有限公司 | Assembly line parcel sorting device based on vision technology and electronic equipment |
JP2021094677A (en) * | 2019-12-19 | 2021-06-24 | 本田技研工業株式会社 | Robot control device, robot control method, program and learning model |
JP2021133470A (en) * | 2020-02-28 | 2021-09-13 | セイコーエプソン株式会社 | Control method of robot and robot system |
CN111482800B (en) * | 2020-04-15 | 2021-07-06 | 深圳市欧盛自动化有限公司 | Electricity core top bridge equipment |
CN111761575B (en) * | 2020-06-01 | 2023-03-03 | 湖南视比特机器人有限公司 | Workpiece, grabbing method thereof and production line |
CN111993423A (en) * | 2020-08-17 | 2020-11-27 | 北京理工大学 | Modular intelligent assembling system |
CN112076947A (en) * | 2020-08-31 | 2020-12-15 | 博众精工科技股份有限公司 | Bonding equipment |
JP2022073192A (en) * | 2020-10-30 | 2022-05-17 | セイコーエプソン株式会社 | Control method of robot |
CN112989982B (en) * | 2021-03-05 | 2024-04-30 | 佛山科学技术学院 | Unmanned vehicle image acquisition control method and system |
US11772272B2 (en) * | 2021-03-16 | 2023-10-03 | Google Llc | System(s) and method(s) of using imitation learning in training and refining robotic control policies |
CN113305839B (en) * | 2021-05-26 | 2022-08-19 | 深圳市优必选科技股份有限公司 | Admittance control method and admittance control system of robot and robot |
CN114310063B (en) * | 2022-01-28 | 2023-06-06 | 长春职业技术学院 | Welding optimization method based on six-axis robot |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE3405909A1 (en) * | 1984-02-18 | 1985-08-22 | Licentia Patent-Verwaltungs-Gmbh, 6000 Frankfurt | DEVICE FOR DETECTING, MEASURING ANALYSIS AND / OR REGULATING TECHNICAL PROCEDURES |
US20030187548A1 (en) * | 2002-03-29 | 2003-10-02 | Farhang Sakhitab | Methods and apparatus for precision placement of an optical component on a substrate and precision assembly thereof into a fiberoptic telecommunication package |
JP2004009209A (en) * | 2002-06-06 | 2004-01-15 | Yaskawa Electric Corp | Teaching device for robot |
JP2010172969A (en) * | 2009-01-27 | 2010-08-12 | Yaskawa Electric Corp | Robot system and method of controlling robot |
CN102785249A (en) * | 2011-05-16 | 2012-11-21 | 精工爱普生株式会社 | Robot control system, robot system and program |
CN103038030A (en) * | 2010-12-17 | 2013-04-10 | 松下电器产业株式会社 | Apparatus, method and program for controlling elastic actuator drive mechanism |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5608847A (en) * | 1981-05-11 | 1997-03-04 | Sensor Adaptive Machines, Inc. | Vision target based assembly |
JPS62192807A (en) * | 1986-02-20 | 1987-08-24 | Fujitsu Ltd | Robot control system |
JPH03220603A (en) * | 1990-01-26 | 1991-09-27 | Citizen Watch Co Ltd | Robot control method |
JPWO2008047872A1 (en) * | 2006-10-20 | 2010-02-25 | 株式会社日立製作所 | manipulator |
US8864652B2 (en) * | 2008-06-27 | 2014-10-21 | Intuitive Surgical Operations, Inc. | Medical robotic system providing computer generated auxiliary views of a camera instrument for controlling the positioning and orienting of its tip |
JP5509859B2 (en) * | 2010-01-13 | 2014-06-04 | 株式会社Ihi | Robot control apparatus and method |
JP4837116B2 (en) * | 2010-03-05 | 2011-12-14 | ファナック株式会社 | Robot system with visual sensor |
CN102059703A (en) * | 2010-11-22 | 2011-05-18 | 北京理工大学 | Self-adaptive particle filter-based robot vision servo control method |
WO2012153629A1 (en) * | 2011-05-12 | 2012-11-15 | 株式会社Ihi | Device and method for controlling prediction of motion |
JP5834545B2 (en) * | 2011-07-01 | 2015-12-24 | セイコーエプソン株式会社 | Robot, robot control apparatus, robot control method, and robot control program |
CN102501252A (en) * | 2011-09-28 | 2012-06-20 | 三一重工股份有限公司 | Method and system for controlling movement of tail end of executing arm |
JP6000579B2 (en) * | 2012-03-09 | 2016-09-28 | キヤノン株式会社 | Information processing apparatus and information processing method |
CN104552292A (en) * | 2013-10-10 | 2015-04-29 | 精工爱普生株式会社 | Control system of robot, robot, program and control method of robot |
2014
- 2014-10-10 CN CN201410531769.6A patent/CN104552292A/en active Pending
- 2014-10-10 CN CN201510136619.XA patent/CN104959982A/en active Pending
- 2014-10-10 CN CN201711203574.9A patent/CN108081268A/en active Pending
- 2014-10-10 CN CN201510137541.3A patent/CN104802166B/en not_active Expired - Fee Related
- 2014-10-10 CN CN201510137542.8A patent/CN104802174B/en not_active Expired - Fee Related
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109625118A (en) * | 2018-12-29 | 2019-04-16 | 深圳市优必选科技有限公司 | Biped robot's impedance adjustment and device |
CN113439013A (en) * | 2019-02-25 | 2021-09-24 | 国立大学法人东京大学 | Robot system, robot control device, and robot control program |
CN113439013B (en) * | 2019-02-25 | 2024-05-14 | 国立大学法人东京大学 | Robot system, control device for robot, and control program for robot |
CN112566758A (en) * | 2019-03-06 | 2021-03-26 | 欧姆龙株式会社 | Robot control device, robot control method, and robot control program |
CN111791228A (en) * | 2019-04-01 | 2020-10-20 | 株式会社安川电机 | Programming assistance device, robot system, and programming assistance method |
CN111791228B (en) * | 2019-04-01 | 2023-11-17 | 株式会社安川电机 | Programming support device, robot system, and programming support method |
Also Published As
Publication number | Publication date |
---|---|
CN104802166A (en) | 2015-07-29 |
CN104802174A (en) | 2015-07-29 |
CN104959982A (en) | 2015-10-07 |
CN104802174B (en) | 2016-09-07 |
CN104552292A (en) | 2015-04-29 |
CN104802166B (en) | 2016-09-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108081268A (en) | Robot control system, robot, program and robot control method | |
US11691274B2 (en) | Software compensated robotics | |
CN113696186B (en) | Mechanical arm autonomous moving and grabbing method based on visual-touch fusion under complex illumination condition | |
JP6587761B2 (en) | Position control device and position control method | |
Horaud et al. | Visually guided object grasping | |
JP5743499B2 (en) | Image generating apparatus, image generating method, and program | |
JP7022076B2 (en) | Image recognition processors and controllers for industrial equipment | |
DE102018213985B4 (en) | robotic system | |
CN106873549B (en) | Simulator and analogy method | |
CN104589354B (en) | Robot controller, robot system and robot | |
CN109421071A (en) | Article stacking adapter and machine learning device | |
US20110071675A1 (en) | Visual perception system and method for a humanoid robot | |
CN104842352A (en) | Robot system using visual feedback | |
CN101842195A (en) | Gripping apparatus and gripping apparatus control method | |
CN115194755A (en) | Apparatus and method for controlling robot to insert object into insertion part | |
CN113878588B (en) | Robot compliant assembly method based on tactile feedback and oriented to buckle type connection | |
Cirillo et al. | Vision-based robotic solution for wire insertion with an assigned label orientation | |
CN116419827A (en) | Robot control device and robot system | |
Gravdahl | Force estimation in robotic manipulators: Modeling, simulation and experiments | |
CN111823215A (en) | Synchronous control method and device for industrial robot | |
JP7447568B2 (en) | Simulation equipment and programs | |
CN110413806A (en) | Image management apparatus | |
JP7415013B2 (en) | Robotic device that detects interference between robot components | |
Cerecerez Jiménez | Design and Implementation of a Robotic Arm Prototype for Automated Packaging of Small Chocolates with Artificial Intelligence | |
Bepari et al. | Non Contact 2D and 3D Shape Recognition by Vision System for Robotic Prehension |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20180529 |