CN104802166B - Robot control system, robot, program and robot control method - Google Patents
- Publication number
- CN104802166B (application CN201510137541.3A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B23—MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
- B23P—METAL-WORKING NOT OTHERWISE PROVIDED FOR; COMBINED OPERATIONS; UNIVERSAL MACHINE TOOLS
- B23P19/00—Machines for simply fitting together or separating metal parts or objects, or metal and non-metal parts, whether or not involving some deformation; Tools or devices therefor so far as not provided for in other classes
- B23P19/001—Article feeders for assembling machines
- B23P19/007—Picking-up and placing mechanisms
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
Abstract
A robot control system includes: a captured-image acquisition unit that acquires a captured image; and a control unit that controls a robot based on the captured image. The captured-image acquisition unit acquires a captured image showing, of an assembly object and an assembled object involved in an assembly operation, at least the assembled object. The control unit performs a feature-quantity detection process on the assembled object based on the captured image, and moves the assembly object based on the detected feature quantity of the assembled object.
Description
This application is a divisional application of Chinese patent application No. 201410531769.6, filed on October 10, 2014 and entitled "Robot control system, robot, program and robot control method".
Technical field
The present invention relates to a robot control system, a robot, a program, a robot control method, and the like.
Background art
In recent years, industrial robots have been widely introduced at production sites to mechanize and automate work previously performed by people. However, positioning a robot presupposes accurate calibration, which is an obstacle to introducing robots.
Visual servoing is one means of positioning a robot. Conventional visual servoing is a technique that feedback-controls a robot based on the difference between a reference image (goal image, target image) and a captured image (current image). Because visual servoing does not demand calibration precision, it has attracted attention as a technology that lowers the barrier to introducing robots.

As a technology related to such visual servoing, there is, for example, the prior art described in Patent Document 1.
Patent documentation 1: Japanese Unexamined Patent Publication 2011-143494 publication
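The feedback principle behind visual servoing described above can be sketched in a few lines: the robot is commanded in proportion to the error between features of the current image and features of the reference image, with no calibrated camera-to-robot transform required. This is a minimal illustration of the idea only; the function and gain are hypothetical, not the patent's control law.

```python
def visual_servo_step(current_features, reference_features, gain=0.5):
    """One iteration of image-based visual servoing (illustrative).

    Drives the robot by the error between features extracted from the
    current camera image and features of the reference (goal) image.
    """
    # Feature error: how far the current image is from the goal image.
    error = [r - c for c, r in zip(current_features, reference_features)]
    # Proportional control law: command a motion that reduces the error.
    command = [gain * e for e in error]
    return command, error

# Iterating the step drives the features toward the reference values.
features = [0.0, 0.0]
goal = [10.0, 4.0]
for _ in range(50):
    cmd, err = visual_servo_step(features, goal)
    features = [f + c for f, c in zip(features, cmd)]
```

Because the loop closes on image measurements, residual calibration error only slows convergence rather than biasing the final pose, which is why the technique tolerates imprecise calibration.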
When a robot performs, by visual servoing, an assembly operation of assembling an assembly object into an assembled object, the position and posture of the assembled object can change each time the operation is carried out. When the position and posture of the assembled object change, the position and posture that the assembly object must take to reach the assembled state change as well.

If the same reference image is used for visual servoing every time, correct assembly cannot be achieved: regardless of whether the target position and posture of the assembly object have changed, the assembly object is always moved to the position and posture shown in the reference image.

In theory, the assembly operation could still be performed by visual servoing if a different reference image were used whenever the actual position of the assembled object changed, but this would require preparing an enormous number of reference images and is impractical.
Summary of the invention
One aspect of the present invention relates to a robot control system including: a captured-image acquisition unit that acquires a captured image; and a control unit that controls a robot based on the captured image. The captured-image acquisition unit acquires a captured image showing, of an assembly object and an assembled object involved in an assembly operation, at least the assembled object. The control unit performs a feature-quantity detection process on the assembled object based on the captured image, and moves the assembly object based on the feature quantity of the assembled object.
In this aspect of the invention, the assembly object is moved based on the feature quantity of the assembled object detected from the captured image. Assembly can therefore be carried out correctly even when the position and posture of the assembled object have changed.
In one aspect of the invention, the control unit may perform the feature-quantity detection process on both the assembly object and the assembled object based on one or more captured images showing the two objects, and may move the assembly object, based on the feature quantities of both objects, so that the relative position-posture relationship between the assembly object and the assembled object becomes a target relative position-posture relationship.

This makes it possible to perform the assembly operation based on the feature quantities of the assembly object and the assembled object detected from the captured images.
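The key difference from fixed-reference-image servoing is that the target for the assembly object is recomputed each run from the *detected* features of the assembled object. A minimal sketch of that idea, with hypothetical names and 2-D feature positions standing in for full position-posture feature quantities:

```python
def assembly_target(assembled_feature, relative_goal):
    """Target position for the assembly object, derived from the
    detected position of the assembled object rather than from a fixed
    reference image, so a shifted workpiece shifts the target with it."""
    return [a + r for a, r in zip(assembled_feature, relative_goal)]

# Desired relative position-posture relationship (illustrative offset).
goal_offset = [0.0, 5.0]

# Run 1: assembled object detected at one position.
run1 = assembly_target([2.0, 3.0], goal_offset)
# Run 2: the workpiece has shifted between runs; the target follows it.
run2 = assembly_target([4.0, 3.5], goal_offset)
```

No new reference image is needed for run 2; only the per-run feature detection changes, which is what makes the approach practical when workpiece poses vary.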
In one aspect of the invention, the control unit may move the assembly object so that the relative position-posture relationship becomes the target relative position-posture relationship, based on a feature quantity set as a target feature quantity among the feature quantities of the assembled object and a feature quantity set as an attention feature quantity among the feature quantities of the assembly object.

This makes it possible to move the assembly object so that the relative position-posture relationship between a chosen part of the assembly object and a chosen part of the assembled object becomes the target relative position-posture relationship.
In one aspect of the invention, the control unit may move the assembly object so that an attention feature point of the assembly object coincides with, or approaches, a target feature point of the assembled object.

This makes it possible to assemble the to-be-assembled part of the assembly object into the corresponding part of the assembled object.
In one aspect of the invention, the system may include a reference-image storage unit that stores a reference image showing the assembly object in a target position and posture. The control unit may move the assembly object to the target position-posture based on the reference image and a first captured image showing the assembly object, and, after this move, may perform the feature-quantity detection process on the assembled object based on a second captured image showing at least the assembled object, and move the assembly object based on the detected feature quantity.

This makes it possible, when the same assembly operation is repeated, to use the same reference image to move the assembly object roughly toward the assembled object, and then to complete the assembly operation against the detailed position and posture of the actual assembled object.
In one aspect of the invention, the control unit may perform the feature-quantity detection process on a first assembled object in the assembly operation based on a first captured image showing that object, and move the assembly object based on its feature quantity; after this move, the control unit may perform the feature-quantity detection process on a second assembled object based on a second captured image showing at least that object, and move the assembly object together with the first assembled object based on the feature quantity of the second assembled object.

This makes it possible to assemble the assembly object, the first assembled object, and the second assembled object even when the positions of the first and second assembled objects shift each time the assembly operation is performed.
In one aspect of the invention, the control unit may perform the feature-quantity detection process on the assembly object and a first assembled object based on one or more first captured images showing the two objects in the assembly operation, and move the assembly object, based on both feature quantities, so that the relative position-posture relationship between the assembly object and the first assembled object becomes a first target relative position-posture relationship; the control unit may further perform the feature-quantity detection process on a second assembled object based on a second captured image showing that object, and move the assembly object together with the first assembled object, based on the feature quantities of the first and second assembled objects, so that the relative position-posture relationship between the first and second assembled objects becomes a second target relative position-posture relationship.

This makes it possible to perform visual servoing so that the attention feature point of the assembly object approaches the target feature point of the first assembled object, and the attention feature point of the first assembled object approaches the target feature point of the second assembled object.
In one aspect of the invention, the control unit may perform the feature-quantity detection process on the assembly object, a first assembled object, and a second assembled object based on one or more captured images showing all three in the assembly operation; move the assembly object, based on the feature quantities of the assembly object and the first assembled object, so that their relative position-posture relationship becomes a first target relative position-posture relationship; and move the first assembled object, based on the feature quantities of the first and second assembled objects, so that their relative position-posture relationship becomes a second target relative position-posture relationship.

This makes it possible to assemble three workpieces simultaneously.
In one aspect of the invention, the control unit may perform the feature-quantity detection process on a second assembled object in the assembly operation based on a first captured image showing that object, and move a first assembled object based on the detected feature quantity; the control unit may then perform the feature-quantity detection process on the moved first assembled object based on a second captured image showing it, and move the assembly object based on that feature quantity.

Since the assembly object and the first assembled object need not be moved simultaneously, the robot can be controlled more easily.
In one aspect of the invention, the control unit may control the robot by performing visual servoing based on the captured image. This makes it possible to feedback-control the robot according to the current state of the work.
Another aspect of the present invention relates to a robot including: a captured-image acquisition unit that acquires a captured image; and a control unit that controls the robot based on the captured image. The captured-image acquisition unit acquires a captured image showing, of an assembly object and an assembled object involved in an assembly operation, at least the assembled object. The control unit performs the feature-quantity detection process on the assembled object based on the captured image, and moves the assembly object based on the feature quantity of the assembled object.

A further aspect of the invention relates to a program that causes a computer to function as each of the units described above.
Another aspect of the present invention relates to a robot control method including: a step of acquiring a captured image showing, of an assembly object and an assembled object involved in an assembly operation, at least the assembled object; a step of performing a feature-quantity detection process on the assembled object based on the captured image; and a step of moving the assembly object based on the feature quantity of the assembled object.

According to several aspects of the present invention, it is possible to provide a robot control system, robot, program, robot control method, and the like that can perform an assembly operation correctly even when the position and posture of the assembled object change.
Another aspect is a robot controller including: a first control unit that generates command values so that an end point of a robot arm moves to a target position along a path formed from one or more set teaching positions; an image acquisition unit that acquires a target image, i.e., an image containing the end point when the end point is at the target position, and a current image, i.e., an image containing the end point when the end point is at its current position; a second control unit that generates command values so that the end point moves from the current position to the target position based on the current image and the target image; and a drive control unit that moves the arm using the command values generated by the first control unit and the command values generated by the second control unit.

According to this aspect, command values are generated so that the end point of the robot arm moves to the target position along a path formed from one or more set teaching positions, and command values are also generated so that the end point moves from the current position to the target position based on the current image and the target image; the arm is then moved using both sets of command values. This maintains the high speed of position control while also coping with changes in the target position.
Another aspect is a robot controller including: a control unit that generates a trajectory of the end point of a robot arm so that the end point approaches a target position; and an image acquisition unit that acquires a current image, i.e., an image containing the end point when it is at the current position, and a target image, i.e., an image containing the end point when it is at the target position. The control unit moves the arm based on a path formed from one or more set teaching positions together with the current image and the target image. This maintains the high speed of position control while also coping with changes in the target position.
Here, the drive control unit may move the arm using a signal obtained by superposing the command values generated by the first control unit and the command values generated by the second control unit, each weighted by a prescribed component. This makes it possible to move the end point along a desired trajectory; for example, a trajectory that is not ideal in itself but keeps the object within the field of view of a hand-eye camera.
Here, the drive control unit may determine the prescribed component according to the difference between the current position and the target position. Since the component then varies continuously with distance, control can be switched smoothly.
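The distance-weighted superposition described above can be sketched as a simple blend of the two command values: far from the target the fast position-control command dominates, and near it the visual-servo command takes over, so control hands off smoothly rather than abruptly. The function, the linear weighting law, and the switch radius are illustrative assumptions:

```python
def blend_commands(pos_cmd, vs_cmd, distance, switch_radius=10.0):
    """Superpose the position-control command and the visual-servo
    command with a distance-dependent weight (illustrative linear law)."""
    # w -> 1.0 far from the target (position control dominates),
    # w -> 0.0 at the target (visual servoing dominates).
    w = min(distance / switch_radius, 1.0)
    return [w * p + (1.0 - w) * v for p, v in zip(pos_cmd, vs_cmd)]

# Near the target (distance 2.0 of radius 10.0), the visual-servo
# command carries most of the weight.
near_cmd = blend_commands([1.0, 0.0], [0.2, 0.4], distance=2.0)
```

Because the weight is a continuous function of distance, there is no discrete mode switch and thus no discontinuity in the commanded motion.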
Here, an input unit for entering the prescribed component may also be provided, allowing the user to move the arm along a desired trajectory. A storage unit for the prescribed component may also be provided, allowing a preset initial component to be used.
Here, the drive control unit may be configured to drive the arm using only the command values based on the trajectory generated by the first control unit when the current position satisfies a prescribed condition, and to drive the arm using both the command values based on the trajectory generated by the first control unit and the command values based on the trajectory generated by the second control unit when the current position does not satisfy the condition. This allows faster processing.
Here, the controller may further include: a force detection unit that detects a force applied to the end point; and a third control unit that generates a trajectory of the end point so that the end point moves from the current position to the target position according to the value detected by the force detection unit. The drive control unit moves the arm using the command values based on the trajectories generated by the first, second, and third control units, or using the command values based on the trajectories generated by the first and third control units. Thus, even when the target position moves or cannot be visually confirmed, work can be carried out safely while maintaining the high speed of position control.
Another aspect is a robot system including: a robot having an arm; a first control unit that generates command values so that the end point of the arm moves to a target position along a path formed from one or more set teaching positions; an imaging unit that captures a target image, i.e., an image containing the end point when it is at the target position, and a current image, i.e., an image containing the end point when it is at the current position at the present time; a second control unit that generates command values so that the end point moves from the current position to the target position based on the current image and the target image; and a drive control unit that moves the arm using the command values generated by the first and second control units. This maintains the high speed of position control while also coping with changes in the target position.
Another aspect is a robot system including: a robot having an arm; a control unit that generates a trajectory of the end point of the arm so that the end point approaches a target position; and an imaging unit that captures a current image, i.e., an image containing the end point when it is at the current position at the present time, and a target image, i.e., an image containing the end point when it is at the target position. The control unit moves the arm based on a path formed from one or more set teaching positions together with the current image and the target image. This maintains the high speed of position control while also coping with changes in the target position.
Another aspect is a robot including: an arm; a first control unit that generates command values so that the end point of the arm moves to a target position along a path formed from one or more set teaching positions; an image acquisition unit that acquires a target image, i.e., an image containing the end point when it is at the target position, and a current image, i.e., an image containing the end point when it is at the current position at the present time; a second control unit that generates command values so that the end point moves from the current position to the target position based on the current image and the target image; and a drive control unit that moves the arm using the command values generated by the first and second control units. This maintains the high speed of position control while also coping with changes in the target position.
Another aspect is a robot including: an arm; a control unit that generates a trajectory of the end point of the arm so that the end point approaches a target position; and an image acquisition unit that acquires a current image, i.e., an image containing the end point when it is at the current position, and a target image, i.e., an image containing the end point when it is at the target position. The control unit moves the arm based on a path formed from one or more set teaching positions together with the current image and the target image. This maintains the high speed of position control while also coping with changes in the target position.
Another aspect is a robot control method including: a step of acquiring a target image, i.e., an image containing the end point of a robot arm when the end point is at a target position; a step of acquiring a current image, i.e., an image containing the end point when it is at the current position at the present time; and a step of generating command values so that the end point moves to the target position along a path formed from one or more set teaching positions, generating command values so that the end point moves from the current position to the target position based on the current image and the target image, and moving the arm using these command values. This maintains the high speed of position control while also coping with changes in the target position.
Another aspect is a robot control method for controlling the arm of a robot that has an image acquisition unit for acquiring a current image, i.e., an image containing the end point of the arm when it is at the current position, and a target image, i.e., an image containing the end point when it is at the target position. The arm is controlled using both the command values of position control performed along a path formed from one or more set teaching positions and the command values of visual servoing performed based on the current image and the target image. This maintains the high speed of position control while also coping with changes in the target position.
Another aspect is a robot control method for controlling the arm of a robot that has an image acquisition unit for acquiring a current image, i.e., an image containing the end point of the arm when it is at the current position, and a target image, i.e., an image containing the end point when it is at the target position. In this method, position control along a path formed from one or more set teaching positions and visual servoing based on the current image and the target image are performed simultaneously. This maintains the high speed of position control while also coping with changes in the target position.
Another aspect is a robot control program that causes an arithmetic unit to execute: a step of acquiring a target image, i.e., an image containing the end point of a robot arm when the end point is at a target position; a step of acquiring a current image, i.e., an image containing the end point when it is at the current position at the present time; and a step of generating command values so that the end point moves to the target position along a path formed from one or more set teaching positions, generating command values so that the end point moves from the current position to the target position based on the current image and the target image, and moving the arm using these command values. This maintains the high speed of position control while also coping with changes in the target position.
Another aspect relates to a robot controller comprising: a robot control section that controls a robot according to image information; a variation computation section that obtains an image feature variation from the image information; a variation inference section that computes an inferred image feature variation, i.e. an inferred value of the image feature variation, from variation inference information, which is information about the robot or an object other than the image information; and an abnormality determination section that performs abnormality determination by comparing the image feature variation with the inferred image feature variation.
In this aspect, abnormality determination for control of a robot that uses image information is performed according to the image feature variation and the inferred image feature variation obtained from the variation inference information. This makes it possible to appropriately perform abnormality determination in control of a robot using image information, in particular in methods that use image features.
In another aspect, the variation inference information may be joint angle information of the robot. Joint angle information of the robot can thus be used as the variation inference information.
In another aspect, the variation inference section may compute the inferred image feature variation by applying, to the variation of the joint angle information, a Jacobian matrix that maps the variation of the joint angle information to the image feature variation. The inferred image feature variation can thus be obtained using the variation of the joint angle information and the Jacobian matrix.
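As a concrete illustration of this Jacobian relation, the following minimal Python sketch applies an image Jacobian to a joint-angle change to obtain the inferred image feature variation. All matrix values, dimensions, and function names are hypothetical assumptions for illustration, not taken from the patent.

```python
import numpy as np

def infer_feature_change(image_jacobian, delta_joint_angles):
    """Apply the Jacobian J that maps a joint-angle change dq to an
    image-feature change df, i.e. df ≈ J @ dq."""
    return image_jacobian @ delta_joint_angles

# Hypothetical 2-feature, 3-joint example.
J = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.5, 0.3]])
dq = np.array([0.01, -0.02, 0.005])   # measured joint-angle variation [rad]
df_inferred = infer_feature_change(J, dq)
print(df_inferred)  # inferred image feature variation
```

The inferred variation obtained this way can then be compared with the variation actually measured from the image, which is what the abnormality determination in this aspect relies on.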
In another aspect, the variation inference information may be position and posture information of the end effector of the robot or of the object. The position and posture information of the end effector of the robot or of the object can thus be used as the variation inference information.
In another aspect, the variation inference section may compute the inferred image feature variation by applying, to the variation of the position and posture information, a Jacobian matrix that maps the variation of the position and posture information to the image feature variation. The inferred image feature variation can thus be obtained using the variation of the position and posture information and the Jacobian matrix.
In another aspect, when an image feature f1 of first image information is acquired at an i-th moment (i being a natural number) and an image feature f2 of second image information is acquired at a j-th moment (j being a natural number satisfying j ≠ i), the variation computation section may obtain the difference between f1 and f2 as the image feature variation; and when variation inference information p1 corresponding to the first image information is acquired at a k-th moment (k being a natural number) and variation inference information p2 corresponding to the second image information is acquired at an l-th moment (l being a natural number), the variation inference section may obtain the inferred image feature variation from p1 and p2. The corresponding image feature variation and inferred image feature variation can thus be obtained with the acquisition moments taken into account.
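This time-indexed pairing can be sketched as follows: the measured feature variation is taken between the two image moments, while the inference-information variation is taken between the corresponding sampling moments. All timestamps and values below are hypothetical.

```python
import numpy as np

# Hypothetical logs: image features at the i-th and j-th moments, and
# joint angles (the variation inference information) at the k-th and
# l-th moments corresponding to those two images.
feature_log = {0.00: np.array([10.0, 5.0]),       # f1
               0.10: np.array([10.4, 4.8])}       # f2
joint_log = {0.00: np.array([0.10, 0.20, 0.30]),  # p1
             0.10: np.array([0.12, 0.18, 0.31])}  # p2

# Measured image feature variation: the difference of f1 and f2.
df_measured = feature_log[0.10] - feature_log[0.00]

# Inference-information variation: the difference of p1 and p2, from
# which the inferred image feature variation would then be computed
# (e.g. via a Jacobian matrix, as in the earlier aspects).
dp = joint_log[0.10] - joint_log[0.00]
print(df_measured, dp)
```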
In another aspect, the k-th moment may be the acquisition moment of the first image information, and the l-th moment the acquisition moment of the second image information. Given that joint angle information can be acquired at high speed, processing that takes the acquisition moments into account can thus be carried out easily.
In another aspect, the abnormality determination section may compare the difference between the image feature variation and the inferred image feature variation against a threshold, and judge an abnormality when the difference is larger than the threshold. Abnormality determination can thus be carried out by threshold judgment.
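The threshold judgment can be sketched as follows; the use of a Euclidean norm as the distance, and all numeric values, are illustrative assumptions rather than anything specified by the patent.

```python
import numpy as np

def is_abnormal(df_measured, df_inferred, threshold):
    """Judge an abnormality when the measured and inferred image
    feature variations differ by more than the threshold."""
    return np.linalg.norm(df_measured - df_inferred) > threshold

df_meas = np.array([0.4, -0.2])
print(is_abnormal(df_meas, np.array([0.38, -0.21]), 0.1))  # False: small difference
print(is_abnormal(df_meas, np.array([0.10, 0.30]), 0.1))   # True: large difference
```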
In another aspect, the abnormality determination section may set the threshold larger as the difference between the acquisition moments of the two pieces of image information used in computing the image feature variation becomes larger. The threshold can thus be changed according to the situation.
In another aspect, when the abnormality determination section detects an abnormality, the robot control section may perform control that stops the robot. By stopping the robot upon abnormality detection, safe control of the robot can be realized.
In another aspect, when the abnormality determination section detects an abnormality, the robot control section may skip control based on the abnormality-determination image information, i.e. the one of the two pieces of image information used in computing the image feature variation that was acquired later in time sequence, and instead perform control according to image information acquired at a moment earlier than the abnormality-determination image information. Robot control that uses the abnormality-determination image information can thus be skipped upon abnormality detection.
Another aspect relates to a robot controller comprising: a robot control section that controls a robot according to image information; a variation computation section that obtains a position-posture variation representing the variation of the position and posture information of the end effector of the robot or of an object, or a joint angle variation representing the variation of the joint angle information of the robot; a variation inference section that obtains an image feature variation from the image information and, from the image feature variation, also obtains an inferred position-posture variation, i.e. an inferred value of the position-posture variation, or an inferred joint angle variation, i.e. an inferred value of the joint angle variation; and an abnormality determination section that performs abnormality determination by comparing the position-posture variation with the inferred position-posture variation, or the joint angle variation with the inferred joint angle variation.
In this aspect, an inferred position-posture variation or an inferred joint angle variation is obtained from the image feature variation, and abnormality determination is carried out by comparing the position-posture variation with the inferred position-posture variation, or the joint angle variation with the inferred joint angle variation. Abnormality determination can thus also be performed appropriately by comparison of position and posture information or joint angle information, in control of a robot using image information, in particular in methods that use image features.
In another aspect, the variation computation section may perform any one of the following processes: acquiring multiple pieces of position and posture information and obtaining their difference as the position-posture variation; acquiring multiple pieces of position and posture information and obtaining the joint angle variation from their difference; acquiring multiple pieces of joint angle information and obtaining their difference as the joint angle variation; and acquiring multiple pieces of joint angle information and obtaining the position-posture variation from their difference. The position-posture variation or the joint angle variation can thus be obtained by various means.
Another aspect relates to a robot comprising: a robot control section that controls the robot according to image information; a variation computation section that obtains an image feature variation from the image information; a variation inference section that computes an inferred image feature variation, i.e. an inferred value of the image feature variation, from variation inference information, which is information about the robot or an object other than the image information; and an abnormality determination section that performs abnormality determination by comparing the image feature variation with the inferred image feature variation.
In this aspect, abnormality determination for control of a robot using image information is carried out according to the image feature variation and the inferred image feature variation obtained from the variation inference information. Abnormality determination can thus be performed appropriately in control of a robot using image information, in particular in methods that use image features.
Another aspect relates to a robot control method for controlling a robot according to image information, the method including: a variation computation step of obtaining an image feature variation from the image information; a variation inference step of computing an inferred image feature variation, i.e. an inferred value of the image feature variation, from variation inference information, which is information about the robot or an object other than the image information; and a step of performing abnormality determination by comparing the image feature variation with the inferred image feature variation.
In this aspect, abnormality determination for control of a robot using image information is carried out according to the image feature variation and the inferred image feature variation obtained from the variation inference information. Abnormality determination can thus be performed appropriately in control of a robot using image information, in particular in methods that use image features.
Another aspect relates to a program that causes a computer to function as: a robot control section that controls a robot according to image information; a variation computation section that obtains an image feature variation from the image information; a variation inference section that computes an inferred image feature variation, i.e. an inferred value of the image feature variation, from variation inference information, which is information about the robot or an object other than the image information; and an abnormality determination section that performs abnormality determination by comparing the image feature variation with the inferred image feature variation.
In this aspect, the computer is made to perform abnormality determination for control of a robot using image information, according to the image feature variation and the inferred image feature variation obtained from the variation inference information. Abnormality determination can thus be carried out appropriately in control of a robot using image information, in particular in methods that use image features.
Thus, according to several aspects, a robot controller, a robot, a robot control method, and the like can be provided that appropriately detect abnormalities in image-feature-based control within robot control realized from image information.
Another aspect relates to a robot that performs an inspection process of inspecting an inspection object using a captured image of the object taken by an imaging section, the robot generating, from first inspection information, second inspection information that includes the inspection area used by the inspection process, and performing the inspection process according to the second inspection information.
In this aspect, second inspection information including the inspection area is generated from first inspection information. In general, which region of the image is used by an inspection (in a narrow sense, a visual inspection) depends on information such as the shape of the inspection object and the work performed on it; the inspection area would therefore have to be reset every time the inspection object or the work content changes, placing a large burden on the user. In this respect, generating the second inspection information from the first inspection information makes it possible to determine the inspection area easily.
In another aspect, the second inspection information may include a viewpoint information group containing multiple pieces of viewpoint information, each piece of viewpoint information containing the viewpoint position and the viewing direction of the imaging section during the inspection process. A viewpoint information group can thus be generated as the second inspection information.
In another aspect, a priority may be set for each piece of viewpoint information in the viewpoint information group, used when moving the imaging section to the viewpoint position and viewing direction corresponding to that viewpoint information. A priority can thus be set for each piece of viewpoint information contained in the group.
In another aspect, the imaging section may be moved to the viewpoint position and viewing direction corresponding to each piece of viewpoint information in the viewpoint information group, in a movement order set according to the priorities. The imaging section can thus actually be controlled, using the multiple pieces of viewpoint information with their set priorities, to carry out the inspection process.
In another aspect, when it is judged from movable range information that the imaging section cannot be moved to the viewpoint position and viewing direction corresponding to an i-th piece of viewpoint information (i being a natural number) among the multiple pieces of viewpoint information, the movement of the imaging section based on the i-th viewpoint information may be omitted, and the imaging section moved according to the j-th piece of viewpoint information (j being a natural number satisfying i ≠ j) that follows the i-th piece in the movement order. Control of the imaging section that takes the movable range of the robot into account can thus be realized.
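The priority-ordered traversal with unreachable viewpoints skipped can be sketched as below; the data layout and the reachability rule are hypothetical assumptions, and in practice the reachability check would come from the robot's movable range information.

```python
def plan_viewpoint_visits(viewpoints, is_reachable):
    """Visit viewpoints in descending priority order, skipping any
    whose position/direction the movable range rules out."""
    ordered = sorted(viewpoints, key=lambda v: v['priority'], reverse=True)
    return [v for v in ordered if is_reachable(v)]

viewpoints = [
    {'position': (0.0, 0.0, 1.0), 'direction': (0, 0, -1), 'priority': 1},
    {'position': (1.0, 0.0, 1.0), 'direction': (0, 0, -1), 'priority': 3},
    {'position': (2.0, 0.0, 1.0), 'direction': (0, 0, -1), 'priority': 2},
]
# Hypothetical movable range: only x-coordinates below 2.0 are reachable.
plan = plan_viewpoint_visits(viewpoints, lambda v: v['position'][0] < 2.0)
print([v['priority'] for v in plan])  # [3, 1] — priority 2 is skipped
```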
In another aspect, the first inspection information may include a relative inspection-process target position with respect to the inspection object, and an object coordinate system corresponding to the inspection object may be set with the inspection-process target position as its reference, so that the viewpoint information is generated in the object coordinate system. Viewpoint information in the object coordinate system can thus be generated.
In another aspect, the first inspection information may include object position-posture information representing the position and posture of the inspection object in a world coordinate system; the viewpoint information in the world coordinate system may be obtained from the relative relation between the world coordinate system, obtained from the object position-posture information, and the object coordinate system; and whether the imaging section can be moved to the viewpoint position and viewing direction may be judged from the movable range information in the world coordinate system and the viewpoint information in the world coordinate system. Viewpoint information in the world coordinate system can thus be generated, and the movement of the imaging section controlled according to this viewpoint information and the movable range information of the robot.
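The coordinate transform involved, converting a viewpoint position expressed in the object coordinate system into the world coordinate system using the object's pose, can be sketched as follows; the pose values are hypothetical.

```python
import numpy as np

def viewpoint_to_world(rotation, translation, p_object):
    """Map a point from the object coordinate system into the world
    coordinate system, given the object's pose (R, t) in world frame."""
    return rotation @ p_object + translation

# Hypothetical pose: object rotated 90 degrees about z, offset (1, 2, 0).
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 2.0, 0.0])
p_world = viewpoint_to_world(R, t, np.array([0.5, 0.0, 0.3]))
print(p_world)  # viewpoint position in the world coordinate system
```

The resulting world-frame position is what the movable range check would then be applied to.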
In another aspect, the inspection process may be a process performed on the result of robot work, and the first inspection information may be information acquired during the robot work. The first inspection information can thus be acquired during robot work.
In another aspect, the first inspection information may be information including at least one of the shape information of the inspection object, the position and posture information of the inspection object, and the relative inspection-process target position with respect to the inspection object. At least one of shape information, position and posture information, and inspection-process target position can thus be acquired as the first inspection information.
In another aspect, the first inspection information may include three-dimensional model data of the inspection object. Three-dimensional model data can thus be acquired as the first inspection information.
In another aspect, the inspection process may be a process performed on the result of robot work, and the three-dimensional model data may include post-work three-dimensional model data obtained by performing the robot work, and pre-work three-dimensional model data, i.e. the three-dimensional model data of the inspection object before the robot work. Three-dimensional model data from before and after the work can thus be acquired as the first inspection information.
In another aspect, the second inspection information may include a passing image, the passing image being an image obtained by imaging the three-dimensional model data with a virtual camera placed at the viewpoint position and viewing direction corresponding to the viewpoint information. A passing image can thus be acquired as the second inspection information from the three-dimensional model data and the viewpoint information.
In another aspect, the second inspection information may include a passing image and a pre-work image, the passing image being an image obtained by imaging the post-work three-dimensional model data with a virtual camera placed at the viewpoint position and viewing direction corresponding to the viewpoint information, and the pre-work image being an image obtained by imaging the pre-work three-dimensional model data with the virtual camera placed at that viewpoint position and viewing direction; the inspection area is then obtained by comparing the pre-work image with the passing image. The passing image and the pre-work image can thus be obtained from the viewpoint information and the three-dimensional model data before and after the work, and the inspection area obtained from their comparison.
In another aspect, the comparison may obtain a difference image, i.e. the difference between the pre-work image and the passing image, the inspection area being the region of the difference image that contains the inspection object. The inspection area can thus be obtained using a difference image.
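A minimal sketch of deriving the inspection area from a difference image follows; the image contents, the threshold value, and the bounding-box output format are all illustrative assumptions rather than the patent's specification.

```python
import numpy as np

def inspection_region(pre_work_image, passing_image, threshold=10):
    """Return the bounding box (x0, y0, x1, y1) of pixels that differ
    between the pre-work image and the passing image, or None if the
    two images match everywhere within the threshold."""
    diff = np.abs(pre_work_image.astype(int) - passing_image.astype(int))
    ys, xs = np.nonzero(diff > threshold)
    if ys.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

pre = np.zeros((8, 8), dtype=np.uint8)   # rendered pre-work model image
passing = pre.copy()
passing[2:5, 3:6] = 200                  # part visible only after the work
print(inspection_region(pre, passing))   # → (3, 2, 5, 4)
```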
In another aspect, the second inspection information may include a passing image and a pre-work image, the passing image being an image obtained by imaging the post-work three-dimensional model data with a virtual camera placed at the viewpoint position and viewing direction corresponding to the viewpoint information, and the pre-work image being an image obtained by similarly imaging the pre-work three-dimensional model data; the threshold used by the inspection process performed with the captured image and the passing image is then set according to the similarity between the pre-work image and the passing image. The threshold for the inspection process can thus be set using the similarity between the pre-work image and the passing image.
In another aspect, the robot may include at least a first arm and a second arm, the imaging section being a hand-eye camera provided on at least one of the first arm and the second arm. The inspection process can thus be carried out using two or more arms and a hand-eye camera provided on at least one of them.
Another aspect relates to a processing device that outputs the information used by an inspection process of inspecting an inspection object using a captured image of the object taken by an imaging section, the device generating, from first inspection information, second inspection information that includes viewpoint information, containing the viewpoint position and viewing direction of the imaging section for the inspection process, and the inspection area of the inspection process, and outputting the second inspection information to the device that performs the inspection process.
In this aspect, second inspection information including the inspection area is generated from first inspection information. In general, which region of the image is used by an inspection (in a narrow sense, a visual inspection) depends on information such as the shape of the inspection object and the work performed on it; the inspection area would therefore have to be reset every time the inspection object or the work content changes, placing a large burden on the user. In this respect, generating the second inspection information from the first inspection information makes it possible to determine the inspection area easily, so that another device can carry out the inspection process.
Another aspect relates to an inspection method for performing an inspection process of inspecting an inspection object using a captured image of the object taken by an imaging section, the method including a step of generating, from first inspection information, second inspection information that includes viewpoint information, containing the viewpoint position and viewing direction of the imaging section for the inspection process, and the inspection area of the inspection process.
In this aspect, second inspection information including the inspection area is generated from first inspection information. In general, which region of the image is used by an inspection (in a narrow sense, a visual inspection) depends on information such as the shape of the inspection object and the work performed on it; the inspection area would therefore have to be reset every time the inspection object or the work content changes, placing a large burden on the user. In this respect, generating the second inspection information from the first inspection information makes it possible to determine the inspection area easily.
Thus, according to several aspects, a robot, a processing device, an inspection method, and the like can be provided that generate, from first inspection information, the second inspection information required for inspection, thereby reducing the burden on the user and making inspection easy to perform.
Brief description of the drawings
Fig. 1 is an explanatory diagram of an assembly operation performed by visual servoing.
Fig. 2A and Fig. 2B are explanatory diagrams of positional deviation of the assembly target object.
Fig. 3 is a system configuration example of the present embodiment.
Fig. 4 is an explanatory diagram of an assembly operation performed by visual servoing based on feature quantities of the assembly target object.
Fig. 5 is an example of a captured image used by visual servoing based on feature quantities of the assembly target object.
Fig. 6 is an explanatory diagram of an assembled state.
Fig. 7 is a flowchart of visual servoing based on feature quantities of the assembly target object.
Fig. 8 is another flowchart of visual servoing based on feature quantities of the assembly target object.
Fig. 9 is an explanatory diagram of a process for moving the assembly object to the surface of the assembly target object.
Fig. 10 is an explanatory diagram of an assembly operation performed by two kinds of visual servoing.
Fig. 11 is a flowchart of processing when two kinds of visual servoing are performed in succession.
Fig. 12(A) to 12(D) are explanatory diagrams of reference images and captured images.
Fig. 13A and Fig. 13B are explanatory diagrams of an assembly operation of three workpieces.
Fig. 14(A) to 14(C) are explanatory diagrams of captured images used during the assembly operation of three workpieces.
Fig. 15 is a flowchart of processing during the assembly operation of three workpieces.
Fig. 16(A) to 16(C) are explanatory diagrams of captured images used when assembling three workpieces simultaneously.
Fig. 17 is a flowchart of processing when assembling three workpieces simultaneously.
Fig. 18(A) to 18(C) are explanatory diagrams of captured images used when assembling three workpieces in a different order.
Fig. 19A and Fig. 19B are configuration examples of robots.
Fig. 20 is a configuration example of a robot control system that controls a robot via a network.
Fig. 21 is a diagram showing an example of the configuration of the robot system 1 of the second embodiment.
Fig. 22 is a block diagram showing an example of the functional configuration of the robot system 1.
Fig. 23 is a data flow diagram of the robot system 1.
Fig. 24 is a diagram showing the hardware configuration of the control section 20.
Fig. 25A is a diagram illustrating the trajectory of the end point when the arm 11 is controlled by position control and visual servoing, and Fig. 25B is an example of a target image.
Fig. 26 is a diagram illustrating the component α.
Fig. 27 is a flowchart showing the processing flow of the robot system 2 of the third embodiment of the present invention.
Fig. 28 is a diagram illustrating the position of the object, the position of the switching point, and the trajectory of the end point.
Fig. 29 is a diagram showing an example of the configuration of the robot system 3 of the fourth embodiment of the present invention.
Fig. 30 is a block diagram showing an example of the functional configuration of the robot system 3.
Fig. 31 is a flowchart showing the processing flow of the robot system 3.
Fig. 32 is a diagram showing assembly work in which the robot system 3 inserts a workpiece into a hole H.
Fig. 33 is a flowchart showing the processing flow of the robot system 4 of the fifth embodiment of the present invention.
Fig. 34 is a diagram showing assembly work in which the robot system 4 inserts a workpiece into a hole H.
Fig. 35 is a configuration example of the robot controller of the present embodiment.
Fig. 36 is a detailed configuration example of the robot controller of the present embodiment.
Fig. 37 is a configuration example of an imaging section that acquires image information.
Fig. 38 is a configuration example of the robot of the present embodiment.
Fig. 39 is another example of the configuration of the robot of the present embodiment.
Fig. 40 is a configuration example of a general visual servoing control system.
Fig. 41 is a diagram illustrating the relation between the image feature variation, the variation of the position and posture information, the variation of the joint angle information, and the Jacobian matrices.
Fig. 42 is a diagram illustrating visual servoing control.
Fig. 43A and Fig. 43B are explanatory diagrams of the abnormality detection means of the present embodiment.
Fig. 44 is an explanatory diagram of the means for setting the threshold according to the difference between image acquisition moments.
Fig. 45 is a diagram showing the relation between the image acquisition moment, the acquisition moment of the joint angle information, and the image feature acquisition moment.
Fig. 46 is a diagram showing the image acquisition moment, the acquisition moment of the joint angle information, and the image feature
Amount obtains another figure of the relation in moment.
Figure 47 is to combine mathematical formulae explanatory diagram as Feature change amount, the change of posture information
The figure of the mutual relation of the variable quantity of change amount and joint angle information.
Figure 48 is the flow chart that the process to present embodiment illustrates.
Figure 49 is another detailed configuration example of the robot controller of present embodiment.
Figure 50 is the configuration example of the robot of present embodiment.
Figure 51 A, Figure 51 B are the configuration examples of the processing means of present embodiment.
Figure 52 is the configuration example of the robot of present embodiment.
Figure 53 is other configuration examples of the robot of present embodiment.
Figure 54 is the configuration example checking device using the second inspection information.
Figure 55 is the first inspection information and the second example checking information.
Figure 56 is the flow chart that the flow process to processed offline illustrates.
Figure 57 A, Figure 57 B are the examples of shape information (three-dimensional modeling data).
Figure 58 is the example of the viewpoint candidate information that the generation of view information is used.
Figure 59 is the example of the coordinate figure in the object coordinate system of viewpoint candidate information.
Figure 60 is the setting example of the object coordinate system processing object's position based on inspection.
Figure 61 A~Figure 61 G is image and the example of qualified images before the operation corresponding with each view information
Son.
Figure 62 A~Figure 62 D is the explanatory diagram of the setting means of inspection area.
Figure 63 A~Figure 63 D is the explanatory diagram of the setting means of inspection area.
Figure 64 A~Figure 64 D is the explanatory diagram of the setting means of inspection area.
Figure 65 A~Figure 65 D is the explanatory diagram that the Similarity Measure before and after operation processes.
Figure 66 A~Figure 66 D is the explanatory diagram that the Similarity Measure before and after operation processes.
Figure 67 A~Figure 67 E is the explanatory diagram of the relative importance value of view information.
Figure 68 is the flow chart that the flow process to online treatment illustrates.
Figure 69 A, Figure 69 B are in the view information in object coordinate system and robot coordinate system
The comparative example of view information.
Figure 70 A, Figure 70 B are the explanatory diagrams of image rotation angle.
Detailed description of the invention
Hereinafter, the present embodiment will be described. The embodiments described below do not unduly limit the content of the invention set forth in the claims, and not all of the structures described in the embodiments are necessarily essential constituent features of the invention.
1. Technique of the Present Embodiment
First embodiment
As shown in Fig. 1, a case is described here in which an assembly object WK1 held by the hand HD of a robot is assembled to an assembled object WK2 in an assembly operation. The hand HD of the robot is attached to the tip of the arm AM of the robot.
First, as a comparative example to the present embodiment, consider performing the assembly operation of Fig. 1 by the visual servo using the reference image described above. In that case, the robot is controlled on the basis of a captured image taken by the camera (imaging unit) CM and a reference image prepared in advance. Specifically, the assembly object WK1 is moved, as indicated by the arrow YJ, to the position of the assembly object WK1R appearing in the reference image, and is thereby assembled to the assembled object WK2.
Here, the reference image RIM used at this time is shown in Fig. 2A, and Fig. 2B shows the position in real space (three-dimensional space) of the assembled object WK2 appearing in the reference image RIM. The reference image RIM of Fig. 2A shows the assembled object WK2 and the assembly object WK1R in the assembled state (or the state just before assembly; corresponding to WK1R in Fig. 1). In the visual servo using the reference image RIM, the assembly object WK1 is moved so that the position and posture of the assembly object WK1 appearing in the captured image coincide with the position and posture of the assembly object WK1R in the assembled state appearing in the reference image RIM.
However, as described above, when the assembly operation is actually performed, the position and posture of the assembled object WK2 may change. For example, as shown in Fig. 2B, the center-of-gravity position in real space of the assembled object WK2 appearing in the reference image RIM of Fig. 2A is GC1, whereas the actual assembled object WK2 is displaced, and its actual center-of-gravity position is GC2. In this case, even if the actual assembly object WK1 is moved so that its position and posture coincide with those of the assembly object WK1R appearing in the reference image RIM, the assembled state with the actual assembled object WK2 cannot be attained, and the assembly operation cannot be performed correctly. This is because, when the position and posture of the assembled object WK2 change, the position and posture of the assembly object WK1 that would bring about the assembled state with the assembled object WK2 also change.
In view of this, the robot control system 100 and the like of the present embodiment make it possible to perform the assembly operation correctly even when the position and posture of the assembled object change.
Specifically, Fig. 3 shows a configuration example of the robot control system 100 of the present embodiment. The robot control system 100 of the present embodiment includes a captured-image acquisition unit 110 that acquires a captured image from an imaging unit 200, and a control unit 120 that controls a robot 300 on the basis of the captured image. The robot 300 has an end effector (hand) 310 and an arm 320. The structures of the imaging unit 200 and the robot 300 will be described in detail later.
First, the captured-image acquisition unit 110 acquires a captured image in which, of the assembly object and the assembled object of the assembly operation, at least the assembled object appears.
Next, the control unit 120 performs feature-amount detection processing on the assembled object on the basis of the captured image, and moves the assembly object in accordance with the feature amount of the assembled object. The processing of moving the assembly object includes processing of outputting control information (control signals) for the robot 300, and the like. The functions of the control unit 120 can be realized by hardware such as various processors (a CPU or the like) or an ASIC (a gate array or the like), or by a program.
In this way, whereas the visual servo using the reference image (the comparative example) moves the assembly object in accordance with the feature amount of the assembly object in the reference image, the present embodiment moves the assembly object in accordance with the feature amount of the assembled object appearing in the captured image. For example, as shown in Fig. 4, the feature amount of the workpiece WK2, which is the assembled object, is detected in the captured image taken by the camera CM, and the workpiece WK1, which is the assembly object, is moved as indicated by the arrow YJ in accordance with the detected feature amount of the workpiece WK2.
Here, the captured image taken by the camera CM shows the assembled object WK2 at the current time (the time of shooting). The assembly object WK1 can therefore be moved toward the position of the assembled object WK2 at the current time. This prevents the failure of the comparative example described with Fig. 1, in which the visual servo using the reference image moves the assembly object WK1 to a position where the assembled state cannot be attained at the current time. Furthermore, since a new target position for the visual servo is set from the captured image each time the assembly operation is performed, a correct target position can be set even when the position and posture of the assembled object WK2 change.
As described above, the assembly operation can be performed correctly even when the position and posture of the assembled object change. Moreover, in the present embodiment there is no need to prepare a reference image in advance, so the setup cost of the visual servo can be reduced.
In this way, the control unit 120 controls the robot by performing visual servo based on the captured image. The robot can thereby be feedback-controlled in accordance with the current state of the work.
The robot control system 100 is not limited to the configuration of Fig. 3, and various modifications are possible, such as omitting some of the above elements or adding other elements. Further, as shown in Fig. 19B described later, the robot control system 100 of the present embodiment may be included in the robot 300 and formed integrally with it. Moreover, as shown in Fig. 20 described later, the functions of the robot control system 100 may be realized by a server 500 and terminal devices 330 possessed by the respective robots 300.
For example, in a case where the robot control system 100 and the imaging unit 200 are connected by a network including at least one of wired and wireless connections, the captured-image acquisition unit 110 may be a communication unit (interface unit) that communicates with the imaging unit 200. Further, in a case where the robot control system 100 includes the imaging unit 200, the captured-image acquisition unit 110 may be the imaging unit 200 itself.
Here, a captured image is an image obtained by shooting with the imaging unit 200. A captured image may also be an image stored in an external storage unit or an image acquired via a network. An example of a captured image is the captured image PIM11 of Fig. 5 described later.
An assembly operation is an operation of assembling a plurality of work objects together; specifically, an operation of assembling the assembly object to the assembled object. The assembly operation is, for example, an operation of placing the workpiece WK1 on (or beside) the workpiece WK2, an operation of inserting (fitting) the workpiece WK1 into the workpiece WK2 (an insertion operation or fitting operation), or an operation of bonding, connecting, attaching, or welding the workpiece WK1 to the workpiece WK2 (a bonding operation, connecting operation, assembling operation, or welding operation), and the like.
Further, the assembly object is the object that is assembled to the assembled object in the assembly operation; in the example of Fig. 4, it is the workpiece WK1. The assembled object, on the other hand, is the object to which the assembly object is assembled in the assembly operation; in the example of Fig. 4, it is the workpiece WK2.
2. Details of the Processing
Next, the processing of the present embodiment will be described in detail.
2.1. Visual Servo Based on the Feature Amount of the Assembled Object
The captured-image acquisition unit 110 of the present embodiment acquires one or more captured images in which the assembly object and the assembled object appear. The control unit 120 then performs feature-amount detection processing on the assembly object and the assembled object on the basis of the one or more acquired captured images. Further, on the basis of the feature amount of the assembly object and the feature amount of the assembled object, the control unit 120 moves the assembly object so that the relative position/posture relation between the assembly object and the assembled object becomes a target relative position/posture relation.
Here, the target relative position/posture relation is the relative position/posture relation between the assembly object and the assembled object that is aimed at when the assembly operation is performed by visual servo. For example, in the example of Fig. 4, the relative position/posture relation at the time when the workpiece WK1 contacts (abuts) the triangular hole HL of the workpiece WK2 is the target relative position/posture relation.
This makes it possible to perform the assembly operation and the like in accordance with the feature amount of the assembly object and the feature amount of the assembled object detected from the captured images. The one or more captured images acquired by the captured-image acquisition unit 110 will be described in detail later.
In most assembly operations, the part of the assembly object that is assembled to the assembled object (the assembling part) and the part of the assembled object to which the assembly object is assembled (the assembled part) are usually determined in advance. For example, in the example of Fig. 4, the assembling part of the assembly object is the bottom surface BA of the workpiece WK1, and the assembled part of the assembled object is the triangular hole HL of the workpiece WK2. In the assembly operation of Fig. 4, the assembling part BA is to be inserted into the hole HL of the assembled part; assembling, for example, the side surface SA of the workpiece WK1 into the hole HL would be meaningless. It is therefore preferable to set the assembling part of the assembly object and the assembled part of the assembled object in advance.
Accordingly, the control unit 120 moves the assembly object so that the relative position/posture relation between the assembly object and the assembled object becomes the target relative position/posture relation, on the basis of a feature amount set as a target feature amount among the feature amounts of the assembled object and a feature amount set as an attention feature amount among the feature amounts of the assembly object.
Here, a feature amount is, for example, an image feature point or the contour line of a detection target (the assembly object, the assembled object, or the like) appearing in the image. Feature-amount detection processing is processing of detecting a feature amount in the image, for example feature point detection processing or contour detection processing.
In the following, the case of detecting feature points as feature amounts will be described. A feature point is a point that can be prominently observed in an image. For example, in the captured image PIM11 shown in Fig. 5, the feature points P1 to P10 are detected as feature points of the workpiece WK2, which is the assembled object, and the feature points Q1 to Q5 are detected as feature points of the workpiece WK1, which is the assembly object. In the example of Fig. 5, for ease of illustration and explanation, only the feature points P1 to P10 and Q1 to Q5 are shown as detected, but in an actual captured image many more feature points are detected. Even when more feature points are detected, however, the content of the processing described below is unchanged.
In the present embodiment, a corner detection method or the like is used as the feature point detection method (feature point detection processing), but other general corner detection methods (eigenvalue-based methods, FAST feature detection) may also be used, and local feature descriptors typified by SIFT (Scale Invariant Feature Transform), SURF (Speeded Up Robust Features), and the like may also be used.
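As an illustration of the idea behind these detectors, the following toy sketch scores a pixel by the product of its central-difference gradients, which is non-zero only where intensity changes in both x and y, i.e. at corners. This is only a didactic stand-in, not the Harris, FAST, SIFT, or SURF methods named above; the image, threshold, and scoring rule are assumptions for illustration.

```python
# Toy corner score on a synthetic binary image: a pixel is marked as a
# corner candidate when both its x and y central-difference gradients are
# non-zero (their product exceeds a small threshold). Real systems would
# use Harris, FAST, SIFT, or SURF instead.

def corner_points(img):
    h, w = len(img), len(img[0])
    corners = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            dy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            if abs(dx * dy) > 0.1:  # intensity must change in both axes
                corners.append((x, y))
    return corners

# 10x10 image containing a white square spanning rows/cols 3..6
img = [[1 if 3 <= y <= 6 and 3 <= x <= 6 else 0 for x in range(10)]
       for y in range(10)]
print(corner_points(img))  # only the four corners of the square score
```

On this synthetic square, edge pixels have a gradient along one axis only, so only the four true corners are reported.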
In the present embodiment, visual servo is performed on the basis of the feature amount set as the target feature amount among the feature amounts of the assembled object and the feature amount set as the attention feature amount among the feature amounts of the assembly object. Specifically, in the example of Fig. 5, among the feature points P1 to P10 of the workpiece WK2, the target feature points P9 and P10 are set as the target feature amount. Among the feature points Q1 to Q5 of the workpiece WK1, on the other hand, the attention feature points Q4 and Q5 are set as the attention feature amount.
The control unit 120 then moves the assembly object so that the attention feature points of the assembly object coincide with or approach the target feature points of the assembled object. That is, the assembly object WK1 is moved as indicated by the arrow YJ so that the attention feature point Q4 approaches the target feature point P9 and the attention feature point Q5 approaches the target feature point P10.
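One servo step of this kind can be sketched as follows: the assembly object is displaced by a fraction (a gain) of the mean error between its attention feature points (Q4, Q5) and the target feature points (P9, P10). The 2-D image coordinates, the point pairing, and the gain value are assumptions for illustration; the text does not specify a particular control law.

```python
# Minimal sketch of an image-based servo step: translate the assembly
# object's attention points by gain * (mean error to the target points).

def servo_step(attention_pts, target_pts, gain=0.5):
    n = len(attention_pts)
    ex = sum(t[0] - a[0] for a, t in zip(attention_pts, target_pts)) / n
    ey = sum(t[1] - a[1] for a, t in zip(attention_pts, target_pts)) / n
    # move every attention point by the same rigid translation
    return [(a[0] + gain * ex, a[1] + gain * ey) for a in attention_pts]

q = [(10.0, 40.0), (30.0, 40.0)]  # attention points Q4, Q5 (bottom of WK1)
p = [(10.0, 60.0), (30.0, 60.0)]  # target points P9, P10 (hole of WK2)
for _ in range(20):
    q = servo_step(q, p)
print(q)  # the attention points have converged toward the target points
```

With a gain below 1 the error shrinks geometrically, so repeated steps drive Q4 toward P9 and Q5 toward P10.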
Here, the target feature amount is, among the feature amounts representing the assembled object, the feature amount that serves as the goal when the assembly object is moved by visual servo. In other words, the target feature amount is the feature amount of the assembled part of the assembled object. A target feature point is a feature point set as the target feature amount in the case where feature point detection processing is performed. As described above, in the example of Fig. 5, the feature points P9 and P10 corresponding to the triangular hole HL of the workpiece WK2 are set as the target feature points.
The attention feature amount, on the other hand, is, among the feature amounts representing the assembly object or the assembled object, the feature amount representing the point in real space (in the example of Fig. 5, the bottom surface of the workpiece WK1) that is moved toward the point in real space corresponding to the target feature amount (in the example of Fig. 5, the triangular hole HL of the workpiece WK2). In other words, the attention feature amount is the feature amount of the assembling part of the assembly object. An attention feature point is a feature point set as the attention feature amount in the case where feature point detection processing is performed. As described above, in the example of Fig. 5, the feature points Q4 and Q5 corresponding to the bottom surface of the workpiece WK1 are set as the attention feature points.
The target feature amount (target feature points) and the attention feature amount (attention feature points) may be set in advance by an instructor (user), or may be feature amounts (feature points) set on the basis of a given algorithm. For example, the target feature points may be set in accordance with the distribution of the detected feature points and their relative positional relation. Specifically, in the example of Fig. 5, the feature points P9 and P10, located near the center of the distribution of the feature points P1 to P10 representing the workpiece WK2 in the captured image PIM11, may be set as the target feature points. Alternatively, points corresponding to the target feature points may be set in advance in CAD (Computer Aided Design) data representing the assembled object, and CAD matching may be performed between the CAD data and the captured image, so that the feature points to be set as target feature points are determined (detected) among the feature points of the assembled object from the result of the CAD matching. The same applies to the attention feature amount (attention feature points).
In the present embodiment, the control unit 120 moves the assembly object so that the attention feature points of the assembly object coincide with or approach the target feature points of the assembled object; however, since the assembled object and the assembly object are physical objects, the target feature point and the attention feature point are never actually detected at the identical point. That is, moving the assembly object so that the points coincide means moving the point at which the attention feature point is detected toward the point at which the target feature point is detected.
This makes it possible to move the assembly object so that the relative position/posture relation between the set assembling part of the assembly object and the set assembled part of the assembled object becomes the target relative position/posture relation. The assembling part of the assembly object can thus be assembled to the assembled part of the assembled object; for example, as shown in Fig. 6, the bottom surface BA, which is the assembling part of the workpiece WK1, can be inserted into the hole HL, which is the assembled part of the workpiece WK2.
Next, the processing flow of the present embodiment will be described with the flowchart of Fig. 7.
First, the captured-image acquisition unit 110 acquires, for example, the captured image PIM11 shown in Fig. 5 (S101). Both the assembly object WK1 and the assembled object WK2 appear in this captured image PIM11.
Next, the control unit 120 performs feature-amount detection processing on the basis of the acquired captured image PIM11, thereby detecting the target feature amount FB of the assembled object WK2 and the attention feature amount FA of the assembly object WK1 (S102, S103).
Then, as described above, the control unit 120 moves the assembly object WK1 in accordance with the detected attention feature amount FA and target feature amount FB (S104), and judges whether the relative position/posture relation between the assembly object WK1 and the assembled object WK2 has become the target relative position/posture relation (S105).
Finally, when it is judged that the relative position/posture relation between the assembly object WK1 and the assembled object WK2 has become the target relative position/posture relation as shown in Fig. 6, the processing ends; when it is judged that it has not, the processing returns to step S101 and is repeated. The above is the processing flow of the present embodiment.
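The S101 to S105 loop of Fig. 7 can be sketched as follows. The one-dimensional "scene" and the detection functions are stand-ins for the real image acquisition and feature-amount detection; the gain and tolerance are illustrative assumptions.

```python
# Sketch of the Fig. 7 loop: every cycle re-acquires a captured image,
# re-detects the target feature amount FB (of WK2) and the attention
# feature amount FA (of WK1), moves WK1, and stops once the relative
# relation reaches the target.

scene = {"wk1": 0.0, "wk2": 50.0}      # stand-in positions of the workpieces

def capture(scene):                     # S101: acquire captured image
    return dict(scene)

def detect_fb(image):                   # S102: feature amount FB of WK2
    return image["wk2"]

def detect_fa(image):                   # S103: feature amount FA of WK1
    return image["wk1"]

def at_target(fa, fb, tol=0.1):        # S105: target relation reached?
    return abs(fb - fa) < tol

steps = 0
while True:
    image = capture(scene)
    fb, fa = detect_fb(image), detect_fa(image)
    if at_target(fa, fb):
        break
    scene["wk1"] += 0.5 * (fb - fa)     # S104: move WK1 toward WK2
    steps += 1

print(steps, scene["wk1"])
```

Because FB is re-detected from a fresh capture every cycle, moving "wk2" mid-run would simply redirect the remaining iterations, which is the robustness to displacement the text describes.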
The captured-image acquisition unit 110 may also acquire a plurality of captured images. In this case, the captured-image acquisition unit 110 may acquire a plurality of captured images in each of which both the assembly object and the assembled object appear, or may acquire a captured image in which only the assembly object appears and a captured image in which only the assembled object appears.
Here, the flowchart of Fig. 8 shows the processing flow for the latter case, in which a plurality of captured images are acquired, each showing the assembly object or the assembled object respectively.
First, the captured-image acquisition unit 110 acquires a captured image PIM11 in which at least the assembled object WK2 appears (S201). The assembly object WK1 may also appear in this captured image PIM11. The control unit 120 then detects the target feature amount FB of the assembled object WK2 from the captured image PIM11 (S202).
Next, the captured-image acquisition unit 110 acquires a captured image PIM12 in which at least the assembly object WK1 appears (S203). As in step S201, the assembled object WK2 may also appear in this captured image PIM12. The control unit 120 then detects the attention feature amount FA of the assembly object WK1 from the captured image PIM12 (S204).
Thereafter, as in the processing flow described with Fig. 7, the control unit 120 moves the assembly object WK1 in accordance with the detected attention feature amount FA and target feature amount FB (S205), and judges whether the relative position/posture relation between the assembly object WK1 and the assembled object WK2 has become the target relative position/posture relation (S206).
Finally, when it is judged that the relative position/posture relation between the assembly object WK1 and the assembled object WK2 has become the target relative position/posture relation as shown in Fig. 6, the processing ends; when it is judged that it has not, the processing returns to step S203 and is repeated. The above is the processing flow for the case of acquiring a plurality of captured images in which the assembly object and the assembled object appear respectively.
In the above example, the assembly object is actually assembled to the assembled object by visual servo, but the invention is not limited to this; a state immediately before the assembly object is assembled to the assembled object may also be formed by visual servo. That is, the control unit 120 may determine, in accordance with the feature amount (feature points) of the assembled object, an image region that is in a given positional relation to the assembled object, and move the assembly object so that the attention feature points of the assembly object coincide with or approach the determined image region. In other words, the control unit 120 may determine, in accordance with the feature amount of the assembled object, a point in real space that is in a given positional relation to the assembled object, and move the assembly object to the determined point.
For example, in the captured image PIM shown in Fig. 9, the feature points P8 to P10 are detected as feature points representing the assembled part of the assembled object WK2, namely the triangular hole HL. In this case, the image regions R1 to R3, which are in a given positional relation to the feature points P8 to P10, are determined in the captured image PIM. The assembly object WK1 is then moved so that its attention feature point Q4 coincides with (approaches) the image region R2 and its attention feature point Q5 coincides with (approaches) the image region R3. In this way, for example, the state immediately before the assembly operation can be formed.
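The region determination step can be sketched as follows: one circular image region is placed at a fixed offset from each detected hole feature point, and an attention point is tested for membership in its region. The offset, radius, and circular shape are illustrative assumptions; the text only requires "a given positional relation".

```python
# Sketch of determining image regions (R1-R3) in a given positional
# relation to the detected hole feature points (P8-P10), and checking
# whether an attention point of WK1 has reached its region.

def regions_from_points(hole_pts, offset=(0.0, -5.0), radius=2.0):
    # one circular region per hole feature point, shifted by a fixed offset
    return [((x + offset[0], y + offset[1]), radius) for x, y in hole_pts]

def in_region(point, region):
    (cx, cy), r = region
    return (point[0] - cx) ** 2 + (point[1] - cy) ** 2 <= r ** 2

hole = [(20.0, 30.0), (25.0, 30.0), (30.0, 30.0)]  # P8, P9, P10
rs = regions_from_points(hole)                      # R1, R2, R3
print(in_region((25.0, 25.5), rs[1]))  # attention point Q4 near region R2
```

Reaching the offset regions rather than the hole points themselves leaves the workpieces just short of contact, which is the pre-assembly state described above.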
It is not always necessary to perform processing of detecting the feature amount of the assembly object as in the above example. For example, the feature amount of the assembled object may be detected, the position and posture of the assembled object relative to the robot may be inferred from the detected feature amount, and the robot may then be controlled so that the hand holding the assembly object approaches the inferred position of the assembled object.
2.2. Assembly Operation Performed by Two Kinds of Visual Servo
Next, the case of successively performing both the visual servo using the reference image (first visual servo) and the visual servo that moves the assembly object using the feature amount of the assembled object (second visual servo) will be described.
For example, in Fig. 10, the movement of the assembly object WK1 from the position GC1 toward the position GC2 (the movement indicated by the arrow YJ1) is performed by the first visual servo using the reference image, and the movement of the assembly object WK1 from the position GC2 toward the position GC3 (the movement indicated by the arrow YJ2) is performed by the second visual servo using the feature amount of the assembled object WK2. The positions GC1 to GC3 are center-of-gravity positions of the assembly object WK1.
When such processing is performed, as shown in Fig. 3, the robot control system 100 of the present embodiment further includes a reference image storage unit 130. The reference image storage unit 130 stores a reference image showing the assembly object in the target position/posture. The reference image is, for example, the reference image RIM shown in Fig. 12A described later. The function of the reference image storage unit 130 can be realized by a memory such as a RAM (Random Access Memory), an HDD (Hard Disk Drive), or the like.
The control unit 120 then performs the first visual servo indicated by the arrow YJ1 of Fig. 10 described above, moving the assembly object to the target position/posture on the basis of a first captured image in which at least the assembly object appears and the reference image.
Further, after the first visual servo, the control unit 120 performs the second visual servo indicated by the arrow YJ2 of Fig. 10. That is, after moving the assembly object, the control unit 120 performs feature-amount detection processing on the assembled object on the basis of a second captured image in which at least the assembled object appears, and moves the assembly object in accordance with the feature amount of the assembled object.
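The two successive servo phases can be sketched as follows with scalar stand-ins for the full position/posture control: the first servo converges to the target pose GC2 recorded via the reference image, and the second servo then converges to GC3, the pose derived from the currently detected feature amount of WK2. The numeric pose values, gain, and tolerance are illustrative assumptions.

```python
# Sketch of the two-stage visual servo of Fig. 10: first servo toward the
# reference-image target pose (YJ1), then second servo toward the pose of
# the assembled object as detected in the current captured image (YJ2).

def servo_to(pos, goal, gain=0.5, tol=0.01):
    # iterate one proportional step until the residual error is small
    while abs(goal - pos) >= tol:
        pos += gain * (goal - pos)
    return pos

gc1, gc2 = 0.0, 40.0          # start pose GC1 and reference target pose GC2

def detect_wk2_goal():
    # pose GC3 of the actual (displaced) assembled object, re-detected
    # from the current captured image rather than the reference image
    return 45.0

pos = servo_to(gc1, gc2)                  # first visual servo (arrow YJ1)
pos = servo_to(pos, detect_wk2_goal())    # second visual servo (arrow YJ2)
print(pos)
```

Splitting the motion this way lets the coarse approach reuse a prepared reference image while the final approach corrects for any displacement of WK2, matching the division of labor between YJ1 and YJ2.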
Here, a more specific processing flow will be described with the flowchart of Fig. 11 and Figs. 12A to 12D.
First, as preparation for the first visual servo, the hand HD of the robot is made to hold the assembly object WK1, the assembly object WK1 is moved to the target position/posture (S301), and the assembly object WK1 in the target position/posture is shot by the imaging unit 200 (the camera CM of Fig. 10), thereby obtaining the reference image (target image) RIM shown in Fig. 12A (S302). Then, the feature amount F0 of the assembly object WK1 is detected from the obtained reference image RIM (S303).
Here, the target position and posture is the position and posture of the assembly object WK1 that serves as the goal of the first visual servo operation. For example, in Fig. 10, position GC2 is the target position and posture, and the reference image RIM of Fig. 12(A) shows the assembly object WK1 located at this target position and posture GC2. The target position and posture is set by an instructor (user) when the reference image is generated.

The reference image, like the reference image RIM of Fig. 12(A), is an image showing the object moved in the first visual servo operation, i.e., the assembly object WK1, at the above-described target position and posture. Although the assembly target object WK2 also appears in the reference image RIM of Fig. 12(A), the assembly target object WK2 does not necessarily have to appear in the reference image. Furthermore, the reference image may be an image stored in an external storage unit, an image obtained via a network, an image generated from CAD model data, or the like.
Next, the first visual servo operation is performed. First, the captured image acquisition unit 110 acquires a first captured image PIM101 as shown in Fig. 12(B) (S304).

Here, the first captured image in this example, like the captured image PIM101 of Fig. 12(B), is a captured image showing at least the assembly object WK1 out of the assembly object WK1 and the assembly target object WK2 involved in the assembly operation.

The control unit 120 then detects the feature quantity F1 of the assembly object WK1 from the first captured image PIM101 (S305), and moves the assembly object WK1 as indicated by arrow YJ1 in Fig. 10 on the basis of the feature quantities F0 and F1 (S306).

Next, the control unit 120 determines whether the assembly object WK1 has reached the target position and posture GC2 (S307). When it is determined that the assembly object WK1 has reached the target position and posture GC2, the process proceeds to the second visual servo operation. When it is determined that the assembly object WK1 has not reached the target position and posture GC2, the process returns to step S304 and the first visual servo operation is repeated.

In this way, in the first visual servo operation, the robot is controlled while the feature quantities of the assembly object WK1 in the reference image RIM and in the first captured image PIM101 are compared with each other.
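The capture-compare-move loop of the first visual servo operation can be sketched as a simple feedback iteration. The scene model, the single feature point, and the gain below are all hypothetical stand-ins: in the actual system the feature quantities would come from image processing on the camera images, and the move command would go to the robot controller.

```python
# Hypothetical sketch of the first visual-servo loop (S304-S307).
# The "scene" stands in for the camera and robot: the image feature of
# the assembly object WK1 is reduced to a single 2D point.

class SimulatedScene:
    """Toy camera/robot model: capturing returns WK1's feature point,
    and a move command translates that point directly."""
    def __init__(self, start):
        self.wk1 = list(start)

    def capture_feature(self):          # S304 + S305 combined
        return tuple(self.wk1)

    def move_wk1(self, delta):          # S306: command a relative move
        self.wk1[0] += delta[0]
        self.wk1[1] += delta[1]


def first_visual_servo(scene, f0_ref, gain=0.5, tol=1e-3, max_iter=200):
    """Repeat capture -> compare with reference feature F0 -> move,
    until WK1's feature matches F0 (target pose GC2 reached)."""
    for _ in range(max_iter):
        f1 = scene.capture_feature()
        err = (f0_ref[0] - f1[0], f0_ref[1] - f1[1])
        if max(abs(err[0]), abs(err[1])) < tol:
            return True                 # S307: at target position/posture
        scene.move_wk1((gain * err[0], gain * err[1]))
    return False
```

Because the loop re-captures an image every cycle, the feedback remains valid even if an individual move command under- or overshoots.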
Next, the second visual servo operation is performed. In the second visual servo operation, first, the captured image acquisition unit 110 acquires a second captured image PIM21 as shown in Fig. 12(C) (S308).

Here, the second captured image is a captured image used for the second visual servo operation. In the second captured image PIM21 of this example, both the assembly object WK1 and the assembly target object WK2 appear.

The control unit 120 then detects the target feature quantity FB of the assembly target object WK2 from the second captured image PIM21 (S309). In this example, as shown in Fig. 12(C), target feature points GP1 and GP2 are detected as the target feature quantity FB.

Similarly, the control unit 120 detects the attention feature quantity FA of the assembly object WK1 from the second captured image PIM21 (S310). In this example, as shown in Fig. 12(C), attention feature points IP1 and IP2 are detected as the attention feature quantity FA.

Next, the control unit 120 moves the assembly object WK1 on the basis of the attention feature quantity FA and the target feature quantity FB (S311). That is, as in the example described above with reference to Fig. 5, the assembly object WK1 is moved so that the attention feature point IP1 approaches the target feature point GP1 and the attention feature point IP2 approaches the target feature point GP2.

The control unit 120 then determines whether the assembly object WK1 and the assembly target object WK2 are in the target relative position and posture relationship (S312). For example, in the captured image PIME shown in Fig. 12(D), the attention feature point IP1 is adjacent to the target feature point GP1 and the attention feature point IP2 is adjacent to the target feature point GP2, so it is determined that the assembly object WK1 and the assembly target object WK2 are in the target relative position and posture relationship, and the processing ends.

On the other hand, when it is determined that the assembly object WK1 and the assembly target object WK2 are not in the target relative position and posture relationship, the process returns to step S308 and the second visual servo operation is repeated.
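The second visual-servo loop differs from the first in that both sets of feature points are re-detected from each captured image, so the loop tracks the assembly target object wherever it actually is. A minimal sketch under the same toy assumptions as before (rigid 2D translation, hypothetical detection functions):

```python
# Hypothetical sketch of the second visual-servo loop (S308-S312).
# Both the target feature points GP of WK2 and the attention feature
# points IP of WK1 are re-detected every cycle, so the loop follows
# WK2 even if its position differs from the reference-image setup.

class TwoPieceScene:
    """Toy model: WK1 carries attention points IP1/IP2, WK2 carries
    target points GP1/GP2; moving WK1 translates its points rigidly."""
    def __init__(self, ip_points, gp_points):
        self.ip = [list(p) for p in ip_points]
        self.gp = [tuple(p) for p in gp_points]

    def detect_targets(self):           # S309: GP1, GP2 of WK2
        return list(self.gp)

    def detect_attention(self):         # S310: IP1, IP2 of WK1
        return [tuple(p) for p in self.ip]

    def move_wk1(self, delta):
        for p in self.ip:
            p[0] += delta[0]
            p[1] += delta[1]


def second_visual_servo(scene, gain=0.5, tol=1e-3, max_iter=200):
    for _ in range(max_iter):
        gp = scene.detect_targets()
        ip = scene.detect_attention()
        errs = [(g[0] - i[0], g[1] - i[1]) for g, i in zip(gp, ip)]
        if all(max(abs(ex), abs(ey)) < tol for ex, ey in errs):
            return True                 # S312: target relative pose reached
        # S311: move WK1 by the mean error so each IP approaches its GP
        n = len(errs)
        mean = (sum(e[0] for e in errs) / n, sum(e[1] for e in errs) / n)
        scene.move_wk1((gain * mean[0], gain * mean[1]))
    return False
```

Averaging the per-point errors is one simple way to turn several point correspondences into a single translation command; a real controller would also resolve rotation from the point pairs.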
In this way, each time the same assembly operation is repeated, the same reference image can be used to move the assembly object to the vicinity of the assembly target object, after which the assembly operation can be performed in accordance with the actual detailed position and posture of the assembly target object. That is, even when the position and posture of the assembly target object at the time the reference image was generated differ from (are offset with respect to) the position and posture of the assembly target object in the actual assembly operation, the second visual servo operation compensates for the positional offset of the assembly target object. The same reference image can therefore be used every time in the first visual servo operation, and different reference images are unnecessary. As a result, the cost of preparing reference images and the like can be suppressed.

In step S310 described above, the attention feature quantity FA of the assembly object WK1 is detected from the second captured image PIM21, but it need not necessarily be detected from the second captured image PIM21. For example, when the assembly object WK1 does not appear in the second captured image PIM21, the feature quantity of the assembly object WK1 may be detected from another second captured image PIM22 in which the assembly object appears.
2.3. Assembly operation of three workpieces
Next, as shown in Figs. 13A and 13B, processing for the case of performing an assembly operation on three workpieces WK1 to WK3 will be described.

In this assembly operation, as shown in Fig. 13A, the assembly object WK1 (workpiece WK1, for example a screwdriver) gripped by the first hand HD1 of the robot is assembled to the first assembly target object WK2 (workpiece WK2, for example a screw) gripped by the second hand HD2 of the robot, and the workpiece WK2, now in the assembled state with the workpiece WK1, is assembled to the second assembly target object WK3 (workpiece WK3, for example a screw) on the work table. After the assembly operation, the assembled state shown in Fig. 13B is obtained.
Specifically, when such processing is performed, as shown in Fig. 14(A), the control unit 120 performs feature quantity detection processing on the first assembly target object WK2 on the basis of a first captured image PIM31 in which at least the first assembly target object WK2 in the assembly operation appears. The first captured image in this example is a captured image used when performing the assembly operation between the assembly object WK1 and the first assembly target object WK2.

The control unit 120 then moves the assembly object WK1 as indicated by arrow YJ1 in Fig. 13A on the basis of the feature quantity of the first assembly target object WK2.

Next, as shown in Fig. 14(B), after moving the assembly object WK1, the control unit 120 performs feature quantity detection processing on the second assembly target object WK3 on the basis of a second captured image PIM41 in which at least the second assembly target object WK3 appears. The second captured image in this example is a captured image used when performing the assembly operation between the first assembly target object WK2 and the second assembly target object WK3.

The control unit 120 then moves the assembly object WK1 and the first assembly target object WK2 as indicated by arrow YJ2 in Fig. 13A on the basis of the feature quantity of the second assembly target object WK3.

Thus, even when the positions of the first assembly target object WK2 and the second assembly target object WK3 are offset each time the assembly operation is performed, the assembly operation of the assembly object, the first assembly target object, and the second assembly target object can still be carried out.
Next, the processing flow in the assembly operation of the three workpieces shown in Figs. 13A and 13B will be described in detail with reference to the flowchart of Fig. 15.

First, the captured image acquisition unit 110 acquires one or more first captured images in which at least the assembly object WK1 and the first assembly target object WK2 in the assembly operation appear. The control unit 120 then performs feature quantity detection processing on the assembly object WK1 and the first assembly target object WK2 on the basis of the first captured images.

In this example, first, the captured image acquisition unit 110 acquires a first captured image PIM31 in which the first assembly target object WK2 appears (S401). The control unit 120 then performs feature quantity detection processing on the first assembly target object WK2 on the basis of the first captured image PIM31, thereby detecting a first target feature quantity FB1 (S402). Here, the target feature points GP1 and GP2 shown in Fig. 14(A) are detected as the first target feature quantity FB1.

Next, the captured image acquisition unit 110 acquires a first captured image PIM32 in which the assembly object WK1 appears (S403). The control unit 120 then performs feature quantity detection processing on the assembly object WK1 on the basis of the first captured image PIM32, thereby detecting a first attention feature quantity FA (S404). Here, attention feature points IP1 and IP2 are detected as the first attention feature quantity FA.

In steps S401 to S404, an example has been described in which a plurality of first captured images (PIM31 and PIM32), respectively showing the assembly object WK1 and the first assembly target object WK2, are acquired. However, as shown in Fig. 14(A), when the assembly object WK1 and the first assembly target object WK2 appear in the same first captured image PIM31, the feature quantities of both the assembly object WK1 and the first assembly target object WK2 may be detected from the first captured image PIM31.
Next, the control unit 120 moves the assembly object WK1 so that the relative position and posture relationship between the assembly object WK1 and the first assembly target object WK2 becomes a first target relative position and posture relationship, on the basis of the feature quantity of the assembly object WK1 (the first attention feature quantity FA) and the feature quantity of the first assembly target object WK2 (the first target feature quantity FB1) (S405). Specifically, the assembly object WK1 is moved so that, in the captured image, the attention feature point IP1 approaches the target feature point GP1 and the attention feature point IP2 approaches the target feature point GP2. This movement corresponds to the movement of arrow YJ1 in Fig. 13A.

The control unit 120 then determines whether the assembly object WK1 and the first assembly target object WK2 are in the first target relative position and posture relationship (S406). When it is determined that the assembly object WK1 and the first assembly target object WK2 are not in the first target relative position and posture relationship, the process returns to step S403 and the processing is performed again.

On the other hand, when it is determined that the assembly object WK1 and the first assembly target object WK2 are in the first target relative position and posture relationship, a second attention feature quantity FB2 of the first assembly target object WK2 is detected from the first captured image PIM32 (S407). Specifically, as shown in Fig. 14(B) described later, the control unit 120 detects attention feature points IP3 and IP4 as the second attention feature quantity FB2.
Next, the captured image acquisition unit 110 acquires a second captured image PIM41 in which at least the second assembly target object WK3 appears, as shown in Fig. 14(B) (S408).

The control unit 120 then performs feature quantity detection processing on the second assembly target object WK3 on the basis of the second captured image PIM41, thereby detecting a second target feature quantity FC (S409). Specifically, as shown in Fig. 14(B), the control unit 120 detects target feature points GP3 and GP4 as the second target feature quantity FC.

Next, the control unit 120 moves the assembly object WK1 and the first assembly target object WK2 so that the relative position and posture relationship between the first assembly target object WK2 and the second assembly target object WK3 becomes a second target relative position and posture relationship, on the basis of the feature quantity of the first assembly target object WK2 (the second attention feature quantity FB2) and the feature quantity of the second assembly target object WK3 (the second target feature quantity FC) (S410).

Specifically, the assembly object WK1 and the first assembly target object WK2 are moved so that, in the captured image, the attention feature point IP3 approaches the target feature point GP3 and the attention feature point IP4 approaches the target feature point GP4. This movement corresponds to the movement of arrow YJ2 in Fig. 13A.

The control unit 120 then determines whether the first assembly target object WK2 and the second assembly target object WK3 are in the second target relative position and posture relationship (S411). When it is determined that the first assembly target object WK2 and the second assembly target object WK3 are not in the second target relative position and posture relationship, the process returns to step S408 and the processing is performed again.

On the other hand, when it is determined, as in the captured image PIME shown in Fig. 14(C), that the first assembly target object WK2 and the second assembly target object WK3 are in the assembled state, i.e., in the second target relative position and posture relationship, the processing ends.

In this way, visual servo control and the like can be performed so that the attention feature points (IP1 and IP2) of the assembly object WK1 approach the target feature points (GP1 and GP2) of the first assembly target object WK2, and the attention feature points (IP3 and IP4) of the first assembly target object WK2 approach the target feature points (GP3 and GP4) of the second assembly target object WK3.
Alternatively, instead of first assembling the assembly object WK1 to the first assembly target object WK2 as in the flowchart of Fig. 15, the three workpieces may be assembled simultaneously as shown in Figs. 16(A) to 16(C).

The processing flow in this case is shown in the flowchart of Fig. 17. First, the captured image acquisition unit 110 acquires one or more captured images in which the assembly object WK1, the first assembly target object WK2, and the second assembly target object WK3 in the assembly operation appear (S501). In this example, the captured image PIM51 shown in Fig. 16(A) is acquired.

Next, the control unit 120 performs feature quantity detection processing on the assembly object WK1, the first assembly target object WK2, and the second assembly target object WK3 on the basis of the one or more captured images (S502 to S504).

In this example, as shown in Fig. 16(A), target feature points GP3 and GP4 are detected as the feature quantity of the second assembly target object WK3 (S502). Then, target feature points GP1 and GP2 and attention feature points IP3 and IP4 are detected as the feature quantities of the first assembly target object WK2 (S503). Further, attention feature points IP1 and IP2 are detected as the feature quantity of the assembly object WK1 (S504). When the three workpieces appear in mutually different captured images, the feature quantity detection processing may be performed separately on the different captured images.

Next, the control unit 120 moves the assembly object WK1 so that the relative position and posture relationship between the assembly object WK1 and the first assembly target object WK2 becomes the first target relative position and posture relationship, on the basis of the feature quantity of the assembly object WK1 and the feature quantity of the first assembly target object WK2, and at the same time moves the first assembly target object WK2 so that the relative position and posture relationship between the first assembly target object WK2 and the second assembly target object WK3 becomes the second target relative position and posture relationship, on the basis of the feature quantity of the first assembly target object WK2 and the feature quantity of the second assembly target object WK3 (S505).

That is, the assembly object WK1 and the first assembly target object WK2 are moved simultaneously so that the attention feature point IP1 approaches the target feature point GP1, the attention feature point IP2 approaches the target feature point GP2, the attention feature point IP3 approaches the target feature point GP3, and the attention feature point IP4 approaches the target feature point GP4.
Then, the captured image acquisition unit 110 reacquires a captured image (S506), and the control unit 120 determines, on the basis of the reacquired captured image, whether the three workpieces, namely the assembly object WK1, the first assembly target object WK2, and the second assembly target object WK3, are in the target relative position and posture relationship (S507).

For example, when the captured image acquired in step S506 is the captured image PIM52 shown in Fig. 16(B) and it is determined that the three workpieces are not yet in the target relative position and posture relationship, the process returns to step S503 and the processing is repeated. In this case, the processing from step S503 onward is performed on the basis of the reacquired captured image PIM52.

On the other hand, when the captured image acquired in step S506 is the captured image PIME shown in Fig. 16(C), it is determined that the three workpieces are in the target relative position and posture relationship, and the processing ends.

In this way, the assembly operation of the three workpieces can be performed simultaneously. As a result, the working time of the assembly operation of the three workpieces can be shortened.
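The simultaneous flow above issues two move commands per cycle: one steering WK1 toward WK2 and one steering WK2 toward the fixed WK3. Under the same toy assumptions as the earlier sketches (each workpiece reduced to one 2D point, hypothetical detection), the loop can be written as:

```python
# Hypothetical sketch of the simultaneous three-workpiece flow of
# Fig. 17 (S501-S507). In each cycle WK1 is steered toward WK2 and
# WK2 toward the fixed WK3; both moves are issued together (S505).

class ThreePieceScene:
    def __init__(self, wk1, wk2, wk3):
        self.wk1 = list(wk1)
        self.wk2 = list(wk2)
        self.wk3 = tuple(wk3)   # on the work table, does not move

    def errors(self):
        """S502-S504: image-space errors WK1->WK2 and WK2->WK3."""
        e12 = (self.wk2[0] - self.wk1[0], self.wk2[1] - self.wk1[1])
        e23 = (self.wk3[0] - self.wk2[0], self.wk3[1] - self.wk2[1])
        return e12, e23


def simultaneous_assembly(scene, gain=0.5, tol=1e-3, max_iter=500):
    for _ in range(max_iter):
        e12, e23 = scene.errors()
        if max(abs(v) for v in e12 + e23) < tol:
            return True                 # S507: all three in target relation
        # S505: move WK1 toward WK2 and WK2 toward WK3 in the same cycle
        scene.wk1[0] += gain * e12[0]
        scene.wk1[1] += gain * e12[1]
        scene.wk2[0] += gain * e23[0]
        scene.wk2[1] += gain * e23[1]
    return False
```

Note that WK1 chases a moving target here, which is why the combined loop typically needs more cycles than the sequential flow of Fig. 15 even though the total working time is shorter.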
Furthermore, when performing the assembly operation of three workpieces, the assembly may be carried out in the order opposite to the assembly order shown in the flowchart of Fig. 15. That is, as shown in Figs. 18(A) to 18(C), the assembly object WK1 may be assembled to the first assembly target object WK2 after the first assembly target object WK2 has been assembled to the second assembly target object WK3.

In this case, as shown in Fig. 18(A), the control unit 120 performs feature quantity detection processing on the second assembly target object WK3 on the basis of a first captured image PIM61 in which at least the second assembly target object WK3 in the assembly operation appears, and moves the first assembly target object WK2 on the basis of the feature quantity of the second assembly target object WK3. Since the details of the feature quantity detection processing are the same as in the example described with reference to Fig. 16(A), their description is omitted.

Next, as shown in Fig. 18(B), the control unit 120 performs feature quantity detection processing on the first assembly target object WK2 on the basis of a second captured image PIM71 in which at least the moved first assembly target object WK2 appears, and moves the assembly object WK1 on the basis of the feature quantity of the first assembly target object WK2 so as to form the assembled state of Fig. 18(C).

Thus, the assembly object WK1 and the first assembly target object WK2 do not need to be moved simultaneously, and the robot can be controlled more easily. Moreover, the assembly operation of three workpieces can be performed even by a single-arm robot rather than a multi-arm robot.
The imaging unit (camera) 200 used in the present embodiment described above includes, for example, an imaging element such as a CCD (charge-coupled device) and an optical system. The imaging unit 200 is arranged, for example, on a ceiling, on a work table, or the like, at an angle such that the detection object in the visual servo operation (the assembly object, the assembly target object, the end effector 310 of the robot 300, or the like) falls within the angle of view of the imaging unit 200. The imaging unit 200 outputs information on the captured image to the robot control system 100 and the like. In the present embodiment, the information on the captured image is output to the robot control system 100 as it is, but the embodiment is not limited to this. For example, the imaging unit 200 may include a device (processor) used for image processing and the like.
3. Robot
Next, Figs. 19A and 19B show configuration examples of a robot 300 to which the robot control system 100 of the present embodiment is applied. In both Figs. 19A and 19B, the robot 300 has an end effector 310.

The end effector 310 is a component attached to the end point of the arm in order to grip, lift, hoist, or suction a workpiece (operation object) or to machine a workpiece. The end effector 310 is, for example, a hand (gripping portion), but may be a hook, a suction pad, or the like. Furthermore, a plurality of end effectors may be provided on one arm. The arm is a component of the robot 300 and is a movable component including one or more joints.
For example, in the robot of Fig. 19A, the robot body 300 (robot) and the robot control system 100 are configured separately. In this case, part or all of the functions of the robot control system 100 are realized by, for example, a PC (Personal Computer).

The robot of the present embodiment is not limited to the configuration of Fig. 19A; the robot body 300 and the robot control system 100 may be configured integrally as shown in Fig. 19B. That is, the robot 300 may include the robot control system 100. Specifically, as shown in Fig. 19B, the robot 300 may have a robot body (having an arm and an end effector 310) and a base unit portion supporting the robot body, with the robot control system 100 housed in the base unit portion. In the robot 300 of Fig. 19B, wheels and the like are provided on the base unit portion so that the entire robot can move. Fig. 19A shows a single-arm example, but the robot 300 may be a multi-arm robot such as the dual-arm type shown in Fig. 19B. The robot 300 may be a robot moved by human hands, or may be a robot provided with motors for driving the wheels and moved by controlling the motors with the robot control system 100. Furthermore, the robot control system 100 is not necessarily provided in the base unit portion arranged beneath the robot 300 as shown in Fig. 19B.
As shown in Fig. 20, the functions of the robot control system 100 may also be realized by a server 500 that is communicably connected to the robot 300 via a network 400 including at least one of wired and wireless connections.

Alternatively, in the present embodiment, part of the processing of the robot control system of the invention may be performed by a robot control system on the server 500 side. In this case, the processing is realized by distributed processing with a robot control system provided on the robot 300 side. The robot control system on the robot 300 side is realized by, for example, a terminal device 330 (control unit) provided in the robot 300.

In this case, the robot control system on the server 500 side performs, among the processes of the robot control system of the invention, the processes allocated to the server 500. Meanwhile, the robot control system provided in the robot 300 performs, among the processes of the robot control system of the invention, the processes allocated to the robot 300. Each process of the robot control system of the invention may be a process allocated to the server 500 side or a process allocated to the robot 300 side.

Thus, for example, the server 500, which has higher processing capability than the terminal device 330, can handle processing with a larger processing load. Furthermore, the server 500 can, for example, collectively control the operation of each robot 300, making it easy to coordinate a plurality of robots 300.
In recent years, the manufacture of many kinds of parts in small quantities has tended to increase. When the kind of parts to be manufactured is changed, the operation performed by the robot must be changed. With the configuration shown in Fig. 20, the server 500 can collectively change the operations performed by the robots 300 without redoing the teaching operation for each of the plurality of robots 300.

Furthermore, with the configuration shown in Fig. 20, the trouble involved in updating the software of the robot control system 100 and the like can be greatly reduced compared with the case where one robot control system 100 is provided for each robot 300.
The robot control system and robot of the present embodiment may also realize part or most of the above processing by a program. In this case, a processor such as a CPU executes the program, thereby realizing the robot control system, the robot, and the like of the present embodiment. Specifically, a program stored in an information storage medium is read out, and a processor such as a CPU executes the read program. Here, the information storage medium (a computer-readable medium) stores programs, data, and the like, and its function can be realized by an optical disk (DVD, CD, or the like), an HDD (hard disk drive), a memory (card-type memory, ROM, or the like), or the like. A processor such as a CPU performs the various processes of the present embodiment on the basis of the program (data) stored in the information storage medium. That is, the information storage medium stores a program for causing a computer (a device including an operation unit, a processing unit, a storage unit, and an output unit) to function as each unit of the present embodiment (a program for causing a computer to execute the processing of each unit).
Although the present embodiment has been described above in detail, those skilled in the art will readily understand that many modifications are possible without substantially departing from the novel matters and effects of the invention. Accordingly, all such modifications are included within the scope of the invention. For example, any term described at least once in the description or the drawings together with a different term having a broader meaning or the same meaning may be replaced by that different term anywhere in the description or the drawings. Moreover, the configurations and operations of the robot control system, the robot, and the program are not limited to those described in the present embodiment, and various modifications are possible.
Second embodiment
Fig. 21 is a system configuration diagram showing an example of the configuration of a robot system 1 according to an embodiment of the invention. The robot system 1 of the present embodiment mainly includes a robot 10, a control unit 20, a first imaging unit 30, and a second imaging unit 40.

The robot 10 is an arm-type robot having an arm 11 that includes a plurality of joints 12 and a plurality of links 13. The robot 10 performs processing in accordance with control signals from the control unit 20.
Actuators (not shown) for moving the joints 12 are provided at the joints 12. Each actuator includes, for example, a servo motor and an encoder. The encoder values output by the encoders are used for the feedback control of the robot 10 performed by the control unit 20.

A hand-eye camera 15 is provided near the tip of the arm 11. The hand-eye camera 15 is a unit that photographs an object located at the tip of the arm 11 and generates image data. As the hand-eye camera 15, a visible-light camera, an infrared camera, or the like can be used, for example.

A region of the tip portion of the arm 11 that does not touch any other region of the robot 10 (excluding the hand 14 described later) is defined as the end point of the arm 11. In the present embodiment, the position of the point E shown in Fig. 21 is the position of the end point.
Regarding the structure of the robot 10, the main structure has been described in order to explain the features of the present embodiment, and the robot 10 is not limited to the above structure; structures generally possessed by gripping robots are not excluded. For example, although a six-axis arm is shown in Fig. 21, the number of axes (number of joints) may be further increased or decreased, and the number of links may also be increased or decreased. The shapes, sizes, arrangements, structures, and the like of the various members such as the arm, links, and joints may also be changed as appropriate.
The control unit 20 controls the overall processing of the robot 10. The control unit 20 may be installed at a location away from the body of the robot 10, or may be built into the robot 10. When the control unit 20 is installed at a location away from the body of the robot 10, the control unit 20 is connected to the robot 10 by wire or wirelessly.

The first imaging unit 30 and the second imaging unit 40 are units that photograph the vicinity of the working area of the arm 11 from different angles and generate image data. The first imaging unit 30 and the second imaging unit 40 each include, for example, a camera and are installed on a work table, a ceiling, a wall, or the like. As the first imaging unit 30 and the second imaging unit 40, visible-light cameras, infrared cameras, or the like can be used. The first imaging unit 30 and the second imaging unit 40 are connected to the control unit 20, and images captured by the first imaging unit 30 and the second imaging unit 40 are input to the control unit 20. Alternatively, the first imaging unit 30 and the second imaging unit 40 may be connected to the robot 10 rather than to the control unit 20, or may be built into the robot 10. In that case, the images captured by the first imaging unit 30 and the second imaging unit 40 are input to the control unit 20 via the robot 10.
Next, a functional configuration example of the robot system 1 will be described. Figure 22 is a functional block diagram of the robot system 1.
The robot 10 includes an operation control unit 101 that controls the arm 11 based on the encoder values of the actuators, the sensor values of the sensors, and the like. The operation control unit 101 drives the actuators so that the arm 11 moves to the position output from the control unit 20, based on the information output from the control unit 20, the actuator encoder values, the sensor values, and so on. The current position of the end point can be obtained from, for example, the encoder values of the actuators provided in the joints 12 and elsewhere.
The control unit 20 mainly includes a position control section 2000, a visual servo section 210, and a drive control section 220. The position control section 2000 mainly includes a path acquisition unit 201 and a first control unit 202. The visual servo section 210 mainly includes an image acquisition unit 211, an image processing unit 212, and a second control unit 213.
The position control section 2000 performs position control that moves the arm 11 along a prescribed path set in advance.
The path acquisition unit 201 acquires information on the path. The path is formed from teaching positions, for example by linking one or more teaching positions, set in advance through teaching, in a prescribed order. The information on the path, for example coordinates and the order of the points on the path, is held in the memory 22 (described later; see Figure 24 and others). The path information held in the memory 22 may also be input via the input device 25 or the like. The path information also includes the final position of the end point, that is, information on the target position.
The first control unit 202 sets the next teaching position, that is, the trajectory of the end point, based on the path information acquired by the path acquisition unit 201. The first control unit 202 then determines, from the trajectory of the end point, the next movement position of the arm 11, that is, the target angle of each actuator provided in the joints 12. Furthermore, the first control unit 202 generates a command value that moves the arm 11 by the target angle, and outputs it to the drive control section 220. Since the processing performed by the first control unit 202 is standard, a detailed description is omitted.
The visual servo section 210 performs so-called visual servoing, a control technique that tracks an object by measuring, from the images captured by the first imaging unit 30 and the second imaging unit 40, the change of the position relative to the object as visual information and using it as feedback information, and thereby moves the arm 11.
As the visual servoing method, a position-based method, a feature-based method, or the like can be used. The position-based method controls the robot based on three-dimensional position information of the object, which is computed by methods such as stereo vision, in which two images with parallax are recognized as a stereo pair. The feature-based method controls the robot so that the difference between the images captured by two imaging units from orthogonal directions and target images held in advance for each imaging unit becomes zero (the pixel-level error of each image becomes zero). In the present embodiment, for example, the feature-based method is used. Although the feature-based method can be carried out with a single imaging unit, two imaging units are preferably used to improve accuracy.
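As a minimal sketch (not part of the patent text), the core quantity in the feature-based method is a per-pixel error between the current image and the target image, which the controller drives toward zero; the function name and the flat-list image representation below are illustrative assumptions:

```python
def feature_error(current_img, target_img):
    # Illustrative: images as flat lists of grayscale values.
    # Feature-based servoing controls the robot so this sum approaches zero.
    return sum(abs(c - t) for c, t in zip(current_img, target_img))
```

With two imaging units, one such error would be computed per camera, and the robot driven so that both go to zero.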
The image acquisition unit 211 acquires the image captured by the first imaging unit 30 (hereinafter, the first image) and the image captured by the second imaging unit 40 (hereinafter, the second image), and outputs the acquired first image and second image to the image processing unit 212.
The image processing unit 212 recognizes the end point from the first image and the second image acquired from the image acquisition unit 211, and extracts images containing the end point. Since various general techniques can be used for the image recognition processing performed by the image processing unit 212, its description is omitted.
The second control unit 213 sets the trajectory of the end point, that is, the movement amount and movement direction of the end point, based on the images extracted by the image processing unit 212 (hereinafter, current images) and images in which the end point is located at the target position (hereinafter, target images). The target images are acquired in advance and stored in the memory 22 or the like. The second control unit 213 then determines, from the movement amount and movement direction of the end point, the target angle of each actuator provided in the joints 12. Furthermore, the second control unit 213 generates a command value that moves the arm 11 by the target angle, and outputs it to the drive control section 220. Since the processing performed by the second control unit 213 is standard, a detailed description is omitted.
In an articulated robot such as the robot 10, once the angle of each joint is determined, the position of the end point is uniquely determined by forward kinematics. That is, in an N-joint robot, one target position can be expressed by N joint angles, so if that combination of N joint angles is treated as one target joint angle, the trajectory of the end point can be thought of as a set of joint angles. The command values output from the first control unit 202 and the second control unit 213 may therefore be values related to position (target positions) or values related to joint angles (target angles).
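To illustrate the forward-kinematics relationship just described, here is a generic sketch for a planar two-link arm (an assumption for illustration only, not the patent's six-axis robot): a single set of joint angles determines exactly one end-point position.

```python
import math

def forward_kinematics_2link(theta1, theta2, l1=1.0, l2=1.0):
    # End-point position of a planar two-link arm; link lengths l1, l2 are
    # illustrative parameters. One angle pair maps to exactly one position.
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

The inverse direction (position to angles) is generally not unique, which is why a trajectory is conveniently represented as the set of target joint angles.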
The drive control section 220 outputs instructions to the operation control unit 101 based on the information acquired from the first control unit 202 and the second control unit 213, so that the end point, that is, the arm 11, moves. The details of the processing performed by the drive control section 220 are described later.
Figure 23 is a data flow diagram of the robot system 1.
In the position control section 2000, a feedback loop runs that brings each joint of the robot toward its target angle by position control. The path information set in advance includes information on the target position. Upon acquiring the information on the target position, the first control unit 202 generates the trajectory and the command value (here, the target angle) from the target-position information and the current position acquired by the path acquisition unit 201.
In the visual servo section 210, a visual feedback loop runs that approaches the target position using information from the first imaging unit 30 and the second imaging unit 40. The second control unit 213 acquires the target images as the information on the target position. Since the current image and the target position on the current image are expressed in the image coordinate system, the second control unit 213 transforms them into the robot coordinate system, and then generates the trajectory and the command value (here, the target angle) from the transformed current image and the target image.
The drive control section 220 outputs to the robot 10 a command value formed from the command value output from the first control unit 202 and the command value output from the second control unit 213. Specifically, the drive control section 220 multiplies the command value output from the first control unit 202 by a coefficient α, multiplies the command value output from the second control unit 213 by the coefficient 1−α, and outputs the sum of these values to the robot 10. Here, α is a real number greater than 0 and less than 1.
The manner of forming a command value from the command value output from the first control unit 202 and the command value output from the second control unit 213 is not limited to this.
In the present embodiment, the first control unit 202 outputs a command value at constant intervals (for example, every 1 millisecond (msec)), while the second control unit 213 outputs a command value at intervals longer than the output interval of the first control unit 202 (for example, every 30 msec). Therefore, when the second control unit 213 has not output a command value, the drive control section 220 multiplies the command value output from the first control unit 202 by the coefficient α, multiplies the command value most recently output from the second control unit 213 by the coefficient 1−α, and outputs the sum of these values to the robot 10. The command value most recently output from the second control unit 213 is temporarily stored in a storage device such as the memory 22 (see Figure 24), and the drive control section 220 reads it from the storage device and uses it.
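The weighted synthesis described above can be sketched as follows (an illustrative reconstruction with assumed function and variable names; per-joint command values are represented as plain lists). Between visual-servo updates, the most recently stored visual-servo command is reused, as in the text:

```python
def blend_commands(u_position, u_visual_latest, alpha):
    # Weighted sum alpha*u_pos + (1 - alpha)*u_vs, one entry per joint.
    # u_visual_latest is the last stored visual-servo command value
    # (position control runs at e.g. every 1 msec, visual servoing
    # only every 30 msec, so it is reused in between).
    assert 0.0 < alpha < 1.0
    return [alpha * p + (1.0 - alpha) * v
            for p, v in zip(u_position, u_visual_latest)]
```

A caller would invoke this every position-control tick, refreshing `u_visual_latest` only on the cycles where the visual servo loop produces a new value.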
The operation control unit 101 acquires the command value (target angle) from the control unit 20. The operation control unit 101 obtains the current angle from the encoder values and the like of the actuators provided in the joints 12 and elsewhere, and computes the difference between the target angle and the current angle (the deviation angle). The operation control unit 101 then computes the movement speed of the arm 11 from the deviation angle, for example making the movement speed faster the larger the deviation angle, and moves the arm 11 by the computed deviation angle at the computed movement speed.
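The deviation-proportional speed rule can be sketched per joint as below. This is a hedged illustration only: the gain, speed clamp, and time step are assumptions, not values from the patent.

```python
def joint_step(current_angle, target_angle, gain=2.0, dt=0.001, max_speed=1.0):
    # Speed proportional to the deviation angle (larger deviation -> faster),
    # clamped to max_speed; advances the joint by one 1-msec control tick.
    deviation = target_angle - current_angle
    speed = max(-max_speed, min(max_speed, gain * deviation))
    return current_angle + speed * dt
```

Repeated calls move the joint quickly while far from the target angle and slow it as the deviation shrinks.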
Figure 24 is a block diagram showing an example of the schematic configuration of the control unit 20. As shown, the control unit 20, constituted by, for example, a computer, includes: a central processing unit (CPU) 21 as an arithmetic unit; a memory 22 constituted by RAM (Random Access Memory) as a volatile storage device and ROM (Read Only Memory) as a non-volatile storage device; an external storage device 23; a communication device 24 that communicates with devices outside the robot 10 and the like; an input device 25 such as a mouse or keyboard; an output device 26 such as a display; and an interface (I/F) 27 that connects the control unit 20 to other units.
Each of the function units described above is realized, for example, by the CPU 21 reading a prescribed program stored in the memory 22 and executing it. The prescribed program may be installed in the memory 22 in advance, or may be downloaded from a network via the communication device 24 to be installed or updated.
The structure of the robot system 1 described above covers the main components needed to explain the features of the present embodiment, and is not limited to the structure described. Moreover, structures possessed by typical robot systems are not excluded.
Next, the characteristic processing of the robot system 1 constituted as above according to the present embodiment will be described. In the present embodiment, the description takes as an example an operation in which the hand-eye camera 15 is used to visually inspect objects O1, O2, and O3 in order, as shown in Figure 21.
When a control start instruction is input via a button or the like (not shown), the control unit 20 controls the arm 11 by position control and visual servoing. When a command value has been input from the second control unit 213 (in the present embodiment, once every 30 cycles), the drive control section 220 uses a command value synthesized, with an arbitrary weight, from the value output from the first control unit 202 (hereinafter, the position-control command value) and the value output from the second control unit 213 (hereinafter, the visual-servo command value), and outputs an instruction to the operation control unit 101. When no command value has been input from the second control unit 213 (in the present embodiment, 29 times out of every 30 cycles), the drive control section 220 uses the position-control command value output from the first control unit 202 together with the command value most recently output from the second control unit 213 and temporarily stored in the memory 22 or the like, and outputs an instruction to the operation control unit 101.
Figure 25A illustrates the trajectory of the end point when the arm 11 is controlled by position control and visual servoing. In Figure 25A, object O1 is placed at position 1, object O2 at position 2, and object O3 at position 3. In Figure 25A, objects O1, O2, and O3 lie on the same plane (the XY plane), and the hand-eye camera 15 is at a constant position in the Z direction.
In Figure 25A, the solid-line trajectory is the trajectory of the end point when only the position-control command value is used. This trajectory passes over positions 1, 2, and 3; when objects O1, O2, and O3 are always placed at the same positions and postures, objects O1, O2, and O3 can be visually inspected by position control alone.
On the other hand, in Figure 25A, consider the case where object O2 has moved from position 2 shown by the solid line to the post-movement position 2. Since the end point moves over the solid-line position of object O2, if only the position-control command value is used, it can be expected that the inspection accuracy for object O2 will drop or that inspection will become impossible.
Visual servoing is well suited to coping with such movement of the object's position. With visual servoing, even if the position of the object shifts, the end point can be moved over the surface of the object. For example, if object O2 is located at the post-movement position 2 and the image shown in Figure 25B is given as the target image, then, assuming only the visual-servo command value is used, the end point follows the dotted-line trajectory in Figure 25A.
Visual servoing is a highly useful control method that can cope with a shift of the object, but because of the frame rate of the first imaging unit 30 or the second imaging unit 40, the image processing time of the image processing unit 212, and so on, it has the problem that reaching the target position takes more time than with position control.
Therefore, by simultaneously using the command values of position control and visual servoing (performing position control and visual servoing simultaneously, that is, in parallel), inspection accuracy is ensured against positional shifts of objects O1, O2, and O3, while the arm moves at a higher speed than with visual servoing alone.
Here, "simultaneously" is not limited to the same time or the same instant. Simultaneously using the command values of position control and visual servoing is a concept that includes both the case of outputting the position-control command value and the visual-servo command value at the same time and the case of outputting them staggered by a minute time. The minute time may be of any length as long as processing equivalent to the simultaneous case can be performed.
In particular, in the case of visual inspection, it is sufficient that the viewing angle of the hand-eye camera 15 contains object O2 (object O2 need not be at the center of the viewing angle), so there is no problem even if the trajectory does not pass over object O2.
Therefore, in the present embodiment, the drive control section 220 synthesizes the position-control command value and the visual-servo command value so as to form a trajectory in which the viewing angle of the hand-eye camera 15 contains object O2. This trajectory is shown by the chain line in Figure 25A. The trajectory does not pass over the surface of object O2, but follows positions at which inspection accuracy can be ensured to the maximum extent.
Besides a shift of the position at which an object is placed, expansion of the parts of the arm 11 caused by temperature changes and the like can also cause a shift between the positions on the path and the actual positions of the objects; this case, too, can be resolved by simultaneously using the command values of position control and visual servoing.
The position of the chain-line trajectory in Figure 25A changes with the weight α. Figure 26 illustrates the weight α.
Figure 26A shows the relation between the distance to the target (here, objects O1, O2, O3) and the weight α. Line A is the case where the weight α is constant regardless of the distance to the target position. Line B is the case where the weight α decreases stepwise with the distance to the target position. Lines C and D are cases where the weight α decreases continuously with the distance to the target position: line C makes the change in α smaller in proportion to the distance, while line D makes the distance proportional to α. In all cases, 0 < α < 1.
For lines B, C, and D of Figure 26A, the weight α is set so that as the distance to the target position decreases, the proportion of the position-control command value decreases and the proportion of the visual-servo command value increases. Thus, when the target position has moved, the trajectory can be generated so that the end point comes closer to the target position.
Furthermore, since the weight α is set so that the command values of position control and visual servoing are superimposed, α can be changed stepwise or continuously with the distance to the target position. By changing the weight stepwise or continuously with distance, control of the arm that is mainly position control can be switched smoothly to control that is mainly visual servoing.
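One possible line-D-style schedule, sketched below as an illustration: α is proportional to the distance to the target and clamped so that both command values always contribute. The function name and the bounds `alpha_min`/`alpha_max` are assumptions, not values from the patent.

```python
def alpha_linear(distance, d_max, alpha_min=0.1, alpha_max=0.9):
    # alpha shrinks as the end point nears the target: far away, position
    # control dominates; near the target, visual servoing dominates.
    # d_max is the distance at (or beyond) which alpha takes its maximum.
    ratio = max(0.0, min(1.0, distance / d_max))
    return alpha_min + (alpha_max - alpha_min) * ratio
```

A stepwise variant (line B) would quantize the same ramp into a few discrete levels instead of varying continuously.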
As shown in Figure 26A, the weight α is not limited to being specified by the distance to the target (here, objects O1, O2, O3). As shown in Figure 26B, the weight α may also be specified by the distance from the starting position. That is, the drive control section 220 can determine the weight α from the difference between the current position and the target position.
The distance to the target and the distance from the starting position can be obtained from the path acquired by the path acquisition unit 201, or can be obtained from the current image and the target image. For example, when obtained from the path, they can be computed from the starting position, the targets, and the coordinates and order of the positions of the objects contained in the path information, together with the coordinates and order of the current position.
As for the relation, shown in Figure 26, between the difference of the current position and the target position and the weight, since the arm 11 is controlled along the trajectory desired by the user, this relation can be input, for example, via an input means such as the input device 25. The relation between the difference of the current position and the target position and the weight is stored in advance in a storage means such as the memory 22 for use. The relation stored in the storage means may be content input via the input means, or may be content set initially in advance.
According to the present embodiment, since the arm (hand-eye camera) is controlled with a command value synthesized at a constant ratio from the command values of position control and visual servoing, inspection can be performed quickly and accurately even when the position of an object shifts. In particular, according to the present embodiment, the speed can be made comparable to position control (fast compared with visual servoing), while the inspection can be made robust against positional shifts compared with position control.
In the present embodiment, the position-control command value and the visual-servo command value are normally synthesized, but when, for example, the positional shift of object O2 is larger than a prescribed threshold, the arm 11 may be moved using only the visual-servo command value. The second control unit 213 determines from the current image whether the positional shift of object O2 is larger than the prescribed threshold.
In the present embodiment, the drive control section 220 determines the weight α from the difference between the current position and the target position, but the method of determining α is not limited to this. For example, the drive control section 220 may change α with the passage of time. Alternatively, the drive control section 220 may change α with the passage of time until a certain time has elapsed, and thereafter change α according to the difference between the current position and the target position.
Third embodiment
The second embodiment of the present invention described above normally controls the arm with a command value synthesized at a constant ratio from the command values of position control and visual servoing, but the scope of application of the present invention is not limited to this. The third embodiment of the present invention combines, according to the position of the object, the case of using only the position-control command value and the case of using a command value synthesized at a constant ratio from the command values of position control and visual servoing. The robot system 2 of the third embodiment of the present invention is described below. Since the structure of the robot system 2 is the same as that of the robot system 1 of the second embodiment, the description of the structure of the robot system 2 is omitted, and the processing of the robot system 2 is described. Parts identical to those of the second embodiment are given the same reference numerals, and their description is omitted.
Figure 27 is a flowchart showing the flow of the control processing of the arm 11 of the present invention. This processing starts, for example, when a control start instruction is input via a button or the like (not shown). In the present embodiment, visual inspection of objects O1 and O2 is performed.
When the processing starts, the position control section 2000 performs position control (step S1000). That is, the first control unit 202 generates a command value based on the path information acquired by the path acquisition unit 201, and outputs it to the drive control section 220. The drive control section 220 outputs the command value output from the first control unit 202 to the robot 10. The operation control unit 101 then moves the arm 11 (that is, the end point) according to the command value.
Next, the first control unit 202 determines, as the result of having moved the end point by position control, whether the end point has passed switching point 1 (step S1002). Information indicating the position of switching point 1 is included in the path information set in advance.
Figure 28 illustrates the positions of objects O1 and O2, the positions of the switching points, and the trajectory of the end point. In the present embodiment, switching point 1 is placed between the starting point and object O1.
When the end point has not passed switching point 1 (No in step S1002), the control unit 20 repeats the processing of step S1000.
When the end point has passed switching point 1 (Yes in step S1002), the drive control section 220 controls the arm 11 using position control and visual servoing (step S1004). That is, the first control unit 202 generates a command value based on the path information acquired by the path acquisition unit 201, and outputs it to the drive control section 220. The second control unit 213 generates a command value based on the current image processed by the image processing unit 212 and the target image, and outputs it to the drive control section 220. The drive control section 220 switches the weight α stepwise with the passage of time, and, using the switched weight α, synthesizes the command value output from the first control unit 202 and the command value output from the second control unit 213, and outputs the result to the robot 10. The operation control unit 101 then moves the arm 11 (that is, the end point) according to the command value.
The processing of step S1004 is described in detail below. Before the processing of step S1004, that is, in the processing of step S1000, the command value from the visual servo section 210 is not used. Therefore, the weight α of the command value from the position control section 2000 is 1 (the weight 1−α of the command value from the visual servo section 210 is 0).
After the processing of step S1004 starts, when a certain time (for example, 10 msec) has elapsed, the drive control section 220 switches the weight α of the command value from the position control section 2000 from 1 to 0.9. The weight 1−α of the command value from the visual servo section 210 thus becomes 0.1. Then, with the weight α of the command value from the position control section 2000 at 0.9 and the weight 1−α of the command value from the visual servo section 210 at 0.1, the drive control section 220 synthesizes their command values and outputs the result to the robot 10.
After that, when the certain time has elapsed again, the drive control section 220 switches the weight α of the command value from the position control section 2000 from 0.9 to 0.8, and the weight 1−α of the command value from the visual servo section 210 from 0.1 to 0.2. In this way, the weight α is switched stepwise as the certain time elapses, and the switched weight is used to synthesize the command value output from the first control unit 202 and the command value output from the second control unit 213.
The drive control section 220 repeats this switching of the weight α and synthesis of the command values until the weight α of the command value from the position control section 2000 becomes 0.5 and the weight 1−α of the command value from the visual servo section 210 becomes 0.5. Once the weight α of the command value from the position control section 2000 has become 0.5 and the weight 1−α of the command value from the visual servo section 210 has become 0.5, the drive control section 220 repeats the synthesis of the command values while maintaining the weight α without switching it.
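The stepwise switching of step S1004 (α: 1.0 → 0.9 → … → 0.5, one step per elapsed interval, then hold) can be sketched as follows; the function shape is an illustrative assumption built on the example values in the text:

```python
def alpha_schedule(elapsed_ms, step_ms=10, alpha_start=1.0,
                   alpha_end=0.5, delta=0.1):
    # alpha drops by `delta` each time `step_ms` elapses, then holds at
    # `alpha_end` (here 10-msec steps: 1.0, 0.9, ..., 0.5 as in the text).
    steps = elapsed_ms // step_ms
    return max(alpha_start - delta * steps, alpha_end)
```

The increasing schedule of step S1008 would be the mirror image, stepping α from 0.5 back up toward 1.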
Thus, visual inspection is possible even when the position of object O1 changes. In addition, while the end point is farther from the object than desired, it is moved only by position control, enabling high-speed processing. When the end point is close to the object, it is moved by position control and visual servoing, so changes in the object's position can also be handled. Furthermore, by switching the weight α gradually, sudden motion or vibration of the arm 11 can be prevented.
In the processing of step S1004, when the end point passes switching point 2 (described in detail in step S1006) while this switching of the weight α and synthesis of the command values are in progress, the switching of α and the synthesis of the command values stop before α reaches 0.5, and the processing proceeds to step S1006.
Next, the first control unit 202 determines, as the result of having moved the end point by position control and visual servoing, whether the end point has passed switching point 2 (step S1006). Information indicating the position of switching point 2 is included in the path information. As shown in Figure 28, switching point 2 is set at object O1.
When the end point has not passed switching point 2 (No in step S1006), the control unit 20 repeats the processing of step S1004.
When the end point has passed switching point 2 (Yes in step S1006), the drive control section 220 switches the weight α so that it increases stepwise with the passage of time, and, using the switched weight α, synthesizes the command value output from the first control unit 202 and the command value output from the second control unit 213, and outputs the result to the robot 10. The operation control unit 101 moves the arm 11 (that is, the end point) according to the command value (step S1008).
The processing of step S1008 is described in detail below. Before the processing of step S1008, that is, in the processing of step S1006, the drive control section 220 synthesizes the command values with the weight α of the command value from the position control section 2000 at 0.5 and the weight 1−α of the command value from the visual servo section 210 at 0.5.
After the processing of step S1008 starts, when a certain time (for example, 10 msec) has elapsed, the drive control section 220 switches the weight α of the command value from the position control section 2000 from 0.5 to 0.6. The weight 1−α of the command value from the visual servo section 210 thus becomes 0.4. Then, with the weight α of the command value from the position control section 2000 at 0.6 and the weight 1−α of the command value from the visual servo section 210 at 0.4, the drive control section 220 synthesizes their command values and outputs the result to the robot 10.
After that, when the certain time has elapsed again, the drive control section 220 switches the weight α of the command value from the position control section 2000 from 0.6 to 0.7, and the weight 1−α of the command value from the visual servo section 210 from 0.4 to 0.3. In this way, the weight α is switched stepwise as the certain time elapses, and the switched weight is used to synthesize the command value output from the first control unit 202 and the command value output from the second control unit 213.
The drive control section 220 repeats the switching of the weight α until α becomes 1. When α has become 1, the weight 1−α of the command value from the visual servo section 210 is 0. Therefore, the drive control section 220 outputs to the robot 10 the command value output from the first control unit 202. The operation control unit 101 then moves the arm 11 (that is, the end point) according to the command value (step S1010). As a result, the end point is moved by position control. The processing of step S1010 is the same as that of step S1000.
In this manner, in the stage after passing the object O1, the end point is moved by position control, which enables high-speed processing. In addition, by switching the component α gradually, sudden movements and vibrations of the arm 11 can be prevented.
Next, the first control portion 202 judges the result of moving the end point by position control, i.e., whether the end point has passed switching point 3 (step S1012). Information indicating the position of switching point 3 is included in the information relating to the path, which is set in advance. As shown in Figure 28, switching point 3 is arranged between the object O1 (switching point 2) and the object O2.
When the end point has not passed switching point 3 (No in step S1012), the control portion 20 repeats the process of step S1010.
When the end point has passed switching point 3 (Yes in step S1012), the drive control part 220 switches the component α in stages as the certain time elapses, synthesizes the command value output from the first control portion 202 and the command value output from the second control portion 213 using the component α after switching, and outputs the result to the robot 10. The operation control part 101 moves the arm 11 (i.e., the end point) in accordance with the command value (step S1014). The process of step S1014 is identical to that of step S1004.
Next, the first control portion 202 judges the result of moving the end point by position control and visual servoing, i.e., whether the end point has passed switching point 4 (step S1016). Information indicating the position of switching point 4 is included in the information relating to the path. As shown in Figure 28, switching point 4 is set at the object O2.
When the end point has not passed switching point 4 (No in step S1016), the control portion 20 repeats the process of step S1014.
When the end point has passed switching point 4 (Yes in step S1016), the drive control part 220 switches the component α so that it increases in stages as time elapses, synthesizes the command value output from the first control portion 202 and the command value output from the second control portion 213 using the component α after switching, and outputs the result to the robot 10. The operation control part 101 moves the arm 11 (i.e., the end point) in accordance with the command value (step S1018). The process of step S1018 is identical to that of step S1008.
The drive control part 220 repeats the switching of the component α until the component α becomes 1. When the component α becomes 1, the drive control part 220 outputs the command value output from the first control portion 202 to the robot 10. The operation control part 101 moves the arm 11 (i.e., the end point) in accordance with the command value (step S1020). The process of step S1020 is identical to that of step S1010.
Next, the first control portion 202 judges the result of moving the end point by position control, i.e., whether the end point has arrived at the target position (step S1022). Information indicating the target position is included in the information relating to the path, which is set in advance.
When the end point has not arrived at the target position (No in step S1022), the control portion 20 repeats the process of step S1020.
When the end point has arrived at the target position (Yes in step S1022), the drive control part 220 ends the process.
According to the present embodiment, when approaching an object, the end point is moved by position control and visual servoing, so that it is possible to cope with changes in the position of the object. In addition, in a case where the end point (current position) is farther from the object than desired, or in a case where the end point (current position) satisfies a prescribed condition such as having passed the object, the end point is moved by position control alone, which enables high-speed processing.
Furthermore, according to the present embodiment, when switching between control based on position control and visual servoing and control based on position control alone, sudden movements and vibrations of the arm can be prevented by switching the component α gradually.
In the present embodiment, when the component α is switched gradually, the component α is switched in steps of 0.1 each time the certain time elapses; however, the method of gradually switching the component α is not limited to this. For example, as shown in Figure 26, the component α may be changed in accordance with the position relative to the object (corresponding to the target position in Figure 26A) or the distance from the object (corresponding to the start position in Figure 26B). In addition, as shown in Figure 26, the component α may be changed continuously (see lines C and D of Figures 26A and 26B).
Furthermore, in the present embodiment, when the command values of position control and visual servoing are used together (steps S1004, S1008, S1014, S1018), the component α is set to 0.5, 0.6, 0.7, 0.8, or 0.9; however, the component α may take any real value greater than 0 and less than 1.
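The continuous variants suggested by lines C and D of Figure 26 can be sketched as a distance-based schedule: instead of stepping by 0.1, α falls smoothly as the end point approaches the object. The linear mapping below is one illustrative interpretation, not the patent's formula; the distance thresholds and end values are made up.

```python
def alpha_from_distance(dist_to_target, switch_dist, alpha_far=1.0, alpha_near=0.5):
    """Continuously interpolate alpha between a far value and a near value.

    Beyond switch_dist the position-control component dominates (alpha_far);
    at the target the blend reaches alpha_near.
    """
    if dist_to_target >= switch_dist:
        return alpha_far
    ratio = dist_to_target / switch_dist  # 1 at the switching point, 0 at the target
    return alpha_near + (alpha_far - alpha_near) * ratio

# alpha shrinks continuously from 1.0 to 0.5 as the remaining distance goes to zero,
# so the visual-servo component 1 - alpha grows smoothly near the object.
```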
4th embodiment
In the second and third embodiments of the present invention, visual servoing is performed using a hand camera, but the scope of application of the present invention is not limited to this.
The fourth embodiment of the present invention is a mode in which the present invention is applied to assembly work such as inserting an object into a hole. The fourth embodiment of the present invention will be described below. Parts identical to those of the second embodiment and the third embodiment are denoted by the same reference numerals, and descriptions thereof are omitted.
Figure 29 is a system configuration diagram showing an example of the structure of a robot system 3 according to one embodiment of the present invention. The robot system 3 of the present embodiment mainly includes a robot 10A, a control portion 20, a first shoot part 30, and a second shoot part 40.
The robot 10A is an arm-type robot having an arm 11A that includes a plurality of joints 12 and a plurality of links 13. A hand 14 (a so-called end effector) that grips a workpiece W, a tool, or the like is provided at the front end of the arm 11A. The position of the end point of the arm 11A is the position of the hand 14. The end effector is not limited to the hand 14.
A force sensor 102 (not shown in Figure 29; see Figure 30) is provided at the arm portion of the arm 11A. The force sensor 102 is a sensor that detects the force and moment received as a reaction force against the force output by the robot 10A. As the force sensor, for example, a six-axis force sensor capable of simultaneously detecting six components, namely the force components along three translational axes and the moment components about three rotational axes, can be used. The physical quantities used by the force sensor include current, voltage, electric charge, inductance, deformation, resistance, electromagnetic induction, magnetism, air pressure, light, and the like. The force sensor 102 can detect the six components by converting the desired physical quantity into an electric signal. The force sensor 102 is not limited to six axes, and may have, for example, three axes.
Next, an example of the functional configuration of the robot system 3 will be described. Figure 30 is a functional block diagram of the robot system 3.
The robot 10A includes the force sensor 102 and an operation control part 101 that controls the arm 11A in accordance with the encoder values of the actuators, the sensor values of the sensors, and the like.
The control portion 20A mainly includes a position control section 2000, a visual servo portion 210, an image processing portion 212, a drive control part 220, and a force control portion 230.
The force control portion 230 performs force control (force sense control) in accordance with sensor information (force information and moment information) from the force sensor 102.
In the present embodiment, impedance control is performed as the force control. Impedance control is a control method for setting the mechanical impedance (inertia, damping coefficient, stiffness) produced when an external force is applied to the hand point (hand 14) of the robot to values suitable for the target work. Specifically, it is control in which, in a model in which a mass, a viscosity coefficient, and an elastic element are connected to the end effector portion of the robot, the end effector is brought into contact with the object while being set to the target mass, viscosity coefficient, and elasticity coefficient.
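The impedance model described here, a target mass m, damping d, and stiffness k virtually attached to the end effector, can be sketched as a one-axis discrete-time update: the measured contact force drives the virtual mass-damper-spring, and the resulting displacement becomes the motion command. All numeric values below are illustrative, not taken from the patent.

```python
def impedance_step(x, v, f_ext, m=1.0, d=20.0, k=100.0, dt=0.01):
    """One semi-implicit Euler step of m*a + d*v + k*x = f_ext for a single axis.

    x, v: current displacement and velocity of the virtual end-effector point;
    f_ext: external (reaction) force measured by the force sensor.
    Returns the updated (x, v).
    """
    a = (f_ext - d * v - k * x) / m
    v_new = v + a * dt
    x_new = x + v_new * dt
    return x_new, v_new

# Under a constant 10 N contact force the virtual spring settles near f/k = 0.1:
# the end effector yields compliantly instead of fighting the contact.
x, v = 0.0, 0.0
for _ in range(2000):
    x, v = impedance_step(x, v, f_ext=10.0)
```

Choosing d = 2·sqrt(m·k), as here, gives a critically damped response, which is a common choice for contact tasks because it avoids oscillation against the workpiece.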
The force control portion 230 determines the moving direction and moving amount of the end point by impedance control. In addition, the force control portion 230 determines the target angles of the actuators provided at the joints 12 in accordance with the moving direction and moving amount of the end point. Furthermore, the force control portion 230 generates a command value, as the target angles, for moving the arm 11A, and outputs it to the drive control part 220. Since the processing performed by the force control portion 230 is well known, detailed description thereof is omitted.
The force control is not limited to impedance control, and a control method capable of skillfully handling disturbance forces, such as compliance control, can be used. In addition, in order to perform force control, it is necessary to detect the force applied to an end effector such as the hand 14, but the method of detecting the force applied to the end effector is not limited to the use of a force sensor. For example, the external force received by the end effector may be inferred from the torque values of the axes of the arm 11A. Therefore, in order to perform force control, it is sufficient for the arm 11A to have a mechanism for directly or indirectly acquiring the force applied to the end effector.
Next, characteristic processing of the robot system 3 of the present embodiment, which has the above structure, will be described. Figure 31 is a flowchart showing the flow of the control process of the arm 11A. This process starts, for example, when a control start instruction is input via a button or the like, not shown. In the present embodiment, assembly work in which a workpiece W is inserted into a hole H, as shown in Figure 32, is described as an example.
When the control start instruction is input via the button or the like, not shown, the first control portion 202 controls the arm 11 by position control and moves the end point (step S130). The process of step S130 is identical to that of step S1000.
In the present embodiment, the component of the command value based on position control is denoted α, the component of the command value based on visual servoing is denoted β, and the component of the command value based on force control is denoted γ. The components α, β, and γ are set such that their sum is 1. In step S130, α is 1, and β and γ are 0.
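The constraint that α, β, and γ sum to 1 means the three controllers' outputs are blended as a convex combination. A minimal sketch follows; the controller outputs are placeholder scalars, whereas in the actual system they would be vectors of joint target angles.

```python
def synthesize_command(pos_cmd, vs_cmd, force_cmd, alpha, beta, gamma):
    """Convex combination of the position-control, visual-servo, and force-control
    command values; alpha + beta + gamma must equal 1."""
    assert abs(alpha + beta + gamma - 1.0) < 1e-9, "components must sum to 1"
    return alpha * pos_cmd + beta * vs_cmd + gamma * force_cmd

# Step S130: pure position control (alpha = 1, beta = gamma = 0),
# so the blended command equals the position-control command.
cmd = synthesize_command(12.0, 8.0, 5.0, alpha=1.0, beta=0.0, gamma=0.0)
```

Because the weights sum to 1, the blended command always lies between the individual controllers' commands, which is what makes the gradual hand-over between control modes smooth.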
Next, the first control portion 202 judges the result of moving the end point by position control, i.e., whether the end point has passed switching point 1 (step S132). The process of step S132 is identical to that of step S1002. Information indicating the position of switching point 1 is included in the information relating to the path, which is set in advance.
Figure 32 is a diagram illustrating the trajectory of the end point and the positions of the switching points. In the present embodiment, switching point 1 is arranged at a predetermined prescribed position in the working space.
When the end point has not passed switching point 1 (No in step S132), the first control portion 202 repeats the process of step S130.
When the end point has passed switching point 1 (Yes in step S132), the drive control part 220 switches the components α and β in stages as time elapses, synthesizes the command value output from the first control portion 202 and the command value output from the second control portion 213 using the components α and β after switching, and outputs the result to the robot 10. The operation control part 101 moves the arm 11 (i.e., the end point) in accordance with the command value (step S134). That is, in step S134, the end point is moved by position control and visual servoing.
The process of step S134 is described in detail below. Before the process of step S134 is performed, i.e., during the process of step S132, the drive control part 220 synthesizes the command values with the component α of the command value from the position control section 200 set to 1, the component β of the command value from the visual servo portion 210 set to 0, and the component γ of the command value from the force control portion 230 set to 0.
After the process of step S134 starts, when a certain time (for example, 10 msec) has elapsed, the drive control part 220 switches the component α of the command value from the position control section 2000 from 1 to 0.95, and switches the component β of the command value from the visual servo portion 210 to 0.05. The drive control part 220 then synthesizes the command values on the premise that the component of the command value from the position control section 2000 is 0.95 and the component of the command value from the visual servo portion 210 is 0.05, and outputs the result to the robot 10.
Thereafter, when the certain time has elapsed again, the drive control part 220 switches the component α of the command value from the position control section 2000 from 0.95 to 0.9, and switches the component β of the command value from the visual servo portion 210 from 0.05 to 0.1.
In this manner, the components α and β are switched in stages as the certain time elapses, and the command value output from the first control portion 202 and the command value output from the second control portion 213 are synthesized using the components after switching. The drive control part 220 repeats the switching of the above components until the component α becomes 0.05 and the component β becomes 0.95. As a result, the end point is moved by position control and visual servoing. In step S134, since force control is not used, the component γ remains 0.
The final ratio α:β of the components is not limited to 0.05:0.95. The components α and β can take various values such that their sum is 1. However, in this work, since the position of the hole H is not necessarily constant, it is preferable to make the component β of visual servoing larger than the component α of position control.
The method of gradually switching the component α is not limited to this. For example, as shown in Figures 26A and 26B, the component α may be changed in accordance with the position relative to the object or the distance from the object. In addition, as shown in lines C and D of Figure 26, the component α may be changed continuously.
Next, the second control portion 213 judges the result of moving the end point by position control and visual servoing, i.e., whether the end point has passed switching point 2 (step S136).
Switching point 2 is determined by its position relative to the hole H. For example, switching point 2 is a position at a distance L (for example, 10 cm) from the center of the opening of the hole H. The positions at the distance L from the center of the opening of the hole H can be set as a hemisphere in the x, y, z space. Figure 32 exemplifies the position at the distance L, in the z direction, from the center of the opening of the hole H.
The image processing portion 212 extracts, from the current image, an image that includes the front end of the workpiece W and the hole H, and outputs it to the second control portion 213. In addition, the image processing portion 212 calculates the relation between distances in the image and distances in real space in accordance with the camera parameters (focal length and the like) of the first shoot part 30 or the second shoot part 40, and outputs it to the second control portion 213. The second control portion 213 judges whether the end point has passed switching point 2 in accordance with the difference between the front-end position of the workpiece W and the center position of the hole H in the extracted image.
When the end point has not passed switching point 2 (No in step S136), the first control portion 202, the second control portion 213, and the drive control part 220 repeat the process of step S134.
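The judgment in step S136 relies on converting the pixel offset between the workpiece tip and the hole centre into a real-space distance via the camera parameters. Under a simple pinhole model, an assumption here since the patent only says that the focal length and the like are used, the metres-per-pixel factor at a known working depth is depth divided by focal length (in pixels). All numbers below are illustrative.

```python
def pixels_to_meters(pixel_dist, depth_m, focal_px):
    """Pinhole approximation: real-space size of a pixel offset at a given depth."""
    return pixel_dist * depth_m / focal_px

def passed_switching_point(tip_px, hole_px, depth_m, focal_px, L=0.10):
    """True once the tip-to-hole distance falls below L (e.g. 10 cm), as at
    switching point 2."""
    dx = tip_px[0] - hole_px[0]
    dy = tip_px[1] - hole_px[1]
    pixel_dist = (dx * dx + dy * dy) ** 0.5
    return pixels_to_meters(pixel_dist, depth_m, focal_px) < L

# 300 px apart, seen at 0.5 m depth with an 800 px focal length -> 0.1875 m:
# still outside the 10 cm threshold, so switching point 2 has not been passed.
```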
When the end point has passed switching point 2 (Yes in step S136), the drive control part 220 synthesizes the command value output from the first control portion 202 and the command value output from the force control portion 230, and outputs the result to the robot 10. The operation control part 101 moves the arm 11 (i.e., the end point) in accordance with the command value (step S138).
The process of step S138 is described in detail below. Before the process of step S138 is performed, i.e., during the process of step S134, the drive control part 220 synthesizes the command values with the component α of the command value from the position control section 2000 set to 0.05 and the component β of the command value from the visual servo portion 210 set to 0.95.
After the process of step S138 starts, the drive control part 220 switches the component α of the command value from the position control section 2000 from 0.05 to 0.5. In addition, it switches the component γ of the command value from the force control portion 230 from 0 to 0.5. As a result, the drive control part 220 synthesizes the command values on the premise that the component α of the command value from the position control section 2000 is 0.5, the component β of the command value from the visual servo portion 210 is 0, and the component γ of the command value from the force control portion 230 is 0.5, and outputs the result to the robot 10. In step S138, since visual servoing is not used, the component β remains 0. The components α and γ may also be switched in stages.
Next, the force control portion 230 judges the result of moving the end point by position control and force control, i.e., whether the end point has arrived at the target position (step S140). Whether the end point has arrived at the target position can be judged in accordance with the output of the force sensor 102.
When the end point has not arrived at the target position (No in step S140), the position control section 200, the force control portion 230, and the drive control part 220 repeat the process of step S138.
When the end point has arrived at the target position (Yes in step S140), the drive control part 220 ends the process.
According to the present embodiment, the high speed of position control can be maintained while coping with different target positions. In addition, even in a case where visual servoing cannot be used, for example because the target position cannot be confirmed, the high speed of position control can be maintained and the work can be performed safely.
In the present embodiment, switching point 1 is set in advance at an arbitrary position in the working space, and switching point 2 is set at a position at a prescribed distance from the hole H; however, the positions of switching points 1 and 2 are not limited to this. The positions of switching points 1 and 2 may also be set using the elapsed time from a prescribed position. Specifically, for example, the position of switching point 2 may be set to the position reached 30 seconds after passing switching point 1. Alternatively, the positions of switching points 1 and 2 may be set using the distance from a prescribed position. Specifically, for example, the position of switching point 1 may be set at a position at a distance X from the start location. Furthermore, the positions of switching points 1 and 2 may be set in accordance with a signal input from outside (for example, an input signal from the input device 25).
5th embodiment
In the fourth embodiment, assembly work such as inserting an object into a hole is performed by position control and force control, but the scope of application of the present invention is not limited to this.
The fifth embodiment of the present invention is a mode in which the present invention is applied to assembly work such as inserting an object into a hole by position control, visual servoing, and force control. The fifth embodiment of the present invention will be described below. Since the structure of the robot system 4 of the fifth embodiment is identical to that of the robot system 3, description thereof is omitted. In addition, in the processing performed in the robot system 4, parts identical to those of the second embodiment, the third embodiment, and the fourth embodiment are denoted by the same reference numerals, and detailed descriptions thereof are omitted.
Characteristic processing of the robot system 4 of the present embodiment will be described. Figure 33 is a flowchart showing the flow of the control process of the arm 11A of the robot system 4. This process starts, for example, when a control start instruction is input via a button or the like, not shown. In the present embodiment, assembly work in which a workpiece W is inserted into a hole H formed in a moving stage, as shown in Figure 34, is described as an example.
When the control start instruction is input via the button or the like, not shown, the first control portion 202 controls the arm 11A by position control and moves the end point (step S130).
Next, the first control portion 202 judges the result of moving the end point by position control, i.e., whether the end point has passed switching point 1 (step S132).
When the end point has not passed switching point 1 (No in step S132), the first control portion 202 repeats the process of step S130.
When the end point has passed switching point 1 (Yes in step S132), the drive control part 220 switches the components α and β in stages as time elapses, synthesizes the command value output from the first control portion 202 and the command value output from the second control portion 213 using the components α and β after switching, and outputs the result to the robot 10. The operation control part 101 moves the arm 11A (i.e., the end point) in accordance with the command value (step S134).
Next, the second control portion 213 judges the result of moving the end point by position control and visual servoing, i.e., whether the end point has passed switching point 2 (step S136).
When the end point has not passed switching point 2 (No in step S136), the first control portion 202, the second control portion 213, and the drive control part 220 repeat the process of step S134.
When the end point has passed switching point 2 (Yes in step S136), the drive control part 220 synthesizes the command value output from the first control portion 202, the command value output from the second control portion 213, and the command value output from the force control portion 230, and outputs the result to the robot 10. The operation control part 101 moves the arm 11A (i.e., the end point) in accordance with the command value (step S139).
The process of step S139 is described in detail below. Before the process of step S139 is performed, i.e., during the process of step S134, the drive control part 220 synthesizes the command values with the component α of the command value from the position control section 200 set to 0.05 and the component β of the command value from the visual servo portion 210 set to 0.95.
After the process of step S139 starts, the drive control part 220 switches the component α of the command value from the position control section 2000 from 0.05 to 0.34. In addition, the drive control part 220 switches the component β of the command value from the visual servo portion 210 from 0.95 to 0.33. Furthermore, the drive control part 220 switches the component γ of the command value from the force control portion 230 from 0 to 0.33. As a result, the drive control part 220 synthesizes the command values on the premise that the component α of the command value from the position control section 2000 is 0.34, the component β of the command value from the visual servo portion 210 is 0.33, and the component γ of the command value from the force control portion 230 is 0.33, and outputs the result to the robot 10.
The ratio α:β:γ of the components is not limited to 0.34:0.33:0.33. The components α, β, and γ can be set to various values in accordance with the work, such that their sum is 1. The components α, β, and γ may also be switched gradually.
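The optional gradual switching of α, β, and γ mentioned here can be sketched as a linear interpolation between the weight triple at the end of step S134 (0.05, 0.95, 0) and the target triple of step S139 (0.34, 0.33, 0.33). Because both endpoints sum to 1, every interpolated triple also sums to 1, so the blended command remains a convex combination throughout the transition. The interpolation law itself is an illustrative choice, not the patent's.

```python
def interpolate_weights(w_start, w_end, t):
    """Linearly interpolate between two weight triples; t runs from 0 to 1.

    If both triples sum to 1, every interpolated triple also sums to 1.
    """
    return tuple(a + (b - a) * t for a, b in zip(w_start, w_end))

before = (0.05, 0.95, 0.0)   # end of step S134: alpha, beta, gamma
after = (0.34, 0.33, 0.33)   # target ratio for step S139
halfway = interpolate_weights(before, after, 0.5)
# halfway is (0.195, 0.64, 0.165), which still sums to 1.
```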
Next, the force control portion 230 judges the result of moving the end point by position control, visual servoing, and force control, i.e., whether the end point has arrived at the target position (step S140).
When the end point has not arrived at the target position (No in step S140), the position control section 2000, the visual servo portion 210, the force control portion 230, and the drive control part 220 repeat the process of step S139.
When the end point has arrived at the target position (Yes in step S140), the drive control part 220 ends the process.
According to the present embodiment, the high speed of position control can be maintained, and the end point can be moved to different target positions. In particular, even in a case where the target position moves or the target position cannot be confirmed, since the arm is controlled by position control, visual servoing, and force control, the high speed of position control can be maintained and the work can be performed safely.
In the present embodiment, the arm is controlled by performing position control, visual servoing, and force control simultaneously (parallel control), whereas in the fourth embodiment, the arm is controlled by performing position control and force control simultaneously (parallel control). The drive control part 220 may select whether to perform position control, visual servoing, and force control simultaneously, or to perform position control and force control simultaneously, in accordance with conditions such as whether visual confirmation is possible and whether the workpiece W, the hole H, or the like moves, these conditions being set in advance and stored in the memory 22 or the like.
In the above embodiments, the case of using a one-armed robot has been described, but the present invention can also be applied to the case of using a two-armed robot. In the above embodiments, the case where the end point is provided at the front end of the arm of the robot has been described, but the arrangement in the robot is not limited to the arm. For example, the robot may be provided with a manipulator that is constituted by a plurality of joints and links and that moves as a whole by moving the joints, and the front end of the manipulator may be used as the end point.
In addition, in the above embodiments, the two shoot parts, the first shoot part 30 and the second shoot part 40, are provided, but there may be a single shoot part.
The present invention has been described above using embodiments, but the technical scope of the present invention is not limited to the scope described in the above embodiments. It will be apparent to those skilled in the art that various changes or improvements can be applied to the above embodiments, and it is understood from the recitation of the claims that modes to which such changes or improvements are applied can also be included in the technical scope of the present invention. In particular, the present invention can provide a robot system in which the robot, the control portion, and the shoot part are provided separately, a robot that includes the control portion and the like, and a robot controller constituted by the control portion alone or by the control portion and the shoot part. The present invention can also provide a program for controlling a robot and the like, and a storage medium storing the program.
6th embodiment
1. Means of the present embodiment
Robots controlled using image information are well known. For example, Visual servoing control is known in which image information is acquired continuously and the result of a comparison process between information acquired from this image information and information serving as the target is fed back. In visual servoing, the robot is controlled in the direction in which the difference between the information acquired from the latest image information and the information serving as the target decreases. Specifically, the control is performed as follows: variation amounts of the joint angles and the like that bring the robot closer to the target are obtained, and the joints are driven in accordance with these variation amounts and the like.
In a robot control means in which the target position and posture of the hand point or the like of the robot are determined and control is performed so as to form this target position and posture, it is difficult to improve the positioning precision, i.e., it is difficult to make the hand point (hand) or the like move correctly to the target position and posture. Ideally, if the model of the robot is determined, the hand-point position and posture can be uniquely obtained from this model. The model here refers to, for example, information such as the length of the frame (link) arranged between two joints and the construction of the joints (the rotation direction of the joints, the presence or absence of offsets, and the like).
However, a robot contains various errors, for example deviations in the link lengths and flexure caused by gravity. Owing to these error components, when control is performed so that the robot takes a given position and posture (for example, control that determines the angle of each joint), the ideal position and posture can become different values from the actual position and posture.
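The effect of such model errors can be illustrated with a planar two-link forward-kinematics sketch: the same joint angles yield a different computed hand-point position when one link length deviates slightly from the nominal model. The link lengths and angles below are made up for illustration.

```python
import math

def fk_2link(l1, l2, q1, q2):
    """Planar two-link forward kinematics: hand-point (x, y) from joint angles."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

q1, q2 = math.radians(30), math.radians(45)
ideal = fk_2link(0.30, 0.25, q1, q2)    # nominal model (link lengths in metres)
actual = fk_2link(0.30, 0.252, q1, q2)  # same joints, 2 mm error in link 2
err = math.dist(ideal, actual)          # position error at the hand point
# The same joint command lands the real hand point about 2 mm away from the
# ideal one -- an offset that pure joint-angle control cannot observe.
```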
In this respect, in Visual servoing control, since the result of image processing on the captured image is fed back, even if the current position and posture are offset from the target position and posture, this offset can be recognized and corrected, in the same manner as a person finely adjusting the moving direction of the arm or hand while visually observing the work state.
In Visual servoing control, as the above-mentioned "information acquired from the image" and "information serving as the target", it is possible to use three-dimensional position and posture information of the hand point or the like of the robot, and it is also possible to use image feature amounts acquired from the image without converting them into position and posture information. Visual servoing that uses position and posture information is referred to as position-based visual servoing, and visual servoing that uses image feature amounts is referred to as feature-based visual servoing.
In order to perform visual servoing suitably, it is necessary to detect the position and posture information or the image feature amounts from the image information with good precision. If the precision of this detection process is low, the current state can be misrecognized. In that case, the information fed back in the control loop does not become information that suitably brings the state of the robot closer to the target state, and highly precise robot control cannot be realized.
It is supposed that the position and posture information and the image feature amounts are all acquired by some detection process (for example, a matching process) or the like, but the precision of this detection process is not necessarily sufficient. This is because, in the environment in which the robot actually operates, the captured image contains not only the object to be recognized (for example, the hand of the robot) but also workpieces, jigs, objects arranged in the operating environment, and the like. Since various objects appear in the background of the image, the recognition precision (detection precision) of the desired object decreases, and the precision of the acquired position and posture information and image feature amounts also decreases.
Patent Document 1 discloses the following means: in a position-based visual servo, a position or moving speed in space calculated from the image is compared with a position or moving speed in space calculated from an encoder, thereby detecting an abnormality. Since the position in space is information included in the pose information, and the moving speed is likewise information obtained from the amount of change of the pose information, the position or moving speed in space is hereinafter described as pose information.
By using the means of Patent Document 1, when the pose information obtained from the image information contains a large error or some other abnormality occurs in the visual servo, that abnormality can be detected. If the abnormality can be detected, the control of the robot can be stopped, or the detection of the pose information can be redone, so that at least the situation in which abnormal information continues to be used in the control as it is can be suppressed.
However, the means of Patent Document 1 are premised on a position-based visual servo. In the position-based case, as described above, the comparison processing between the pose information easily obtained from the encoder or the like and the pose information obtained from the image information can be carried out, and is therefore easy to realize. On the other hand, in a feature-based visual servo, image feature amounts are used in the control of the robot, and even if the position in space of the hand tip or the like of the robot is easily obtained from the information of the encoder or the like, its relation to the image feature amounts cannot be obtained directly. That is, when a feature-based visual servo is assumed, it is difficult to apply the means of Patent Document 1.
Therefore, the present applicant proposes the following means: in control using image feature amounts, an abnormality is detected by using the image feature amount change actually acquired from the image information and an estimated image feature amount change inferred from information obtained from the result of robot control. Specifically, as shown in Fig. 35, the robot controller 1000 of the present embodiment includes a robot control unit 1110 that controls the robot 20000 according to image information; a change amount computation unit 1120 that obtains the image feature amount change from the image information; a change amount estimation unit 1130 that computes the estimated image feature amount change, which is an estimate of the image feature amount change, from change amount estimation information that is information on the robot 20000 or the object and is information other than the image information; and an abnormality determination unit 1140 that performs abnormality determination by comparison processing between the image feature amount change and the estimated image feature amount change.
Here, the image feature amount is, as described above, an amount representing a feature in the image such as a region, an area, the length of a line segment, or the position of a feature point, and the image feature amount change is information representing the change between a plurality of (in a narrow sense, two) image feature amounts acquired from a plurality of pieces of image information. As an example, if the two-dimensional positions on the image of three feature points are used as the image feature amount, the image feature amount is a six-dimensional vector, and the image feature amount change is the difference of two six-dimensional vectors, that is, a six-dimensional vector whose elements are the element-wise differences.
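As an illustrative sketch of this definition (the three feature points and their coordinate values below are hypothetical, not taken from the embodiment), the six-dimensional feature vector and its change can be written as:

```python
import numpy as np

# Hypothetical image feature amount: the 2-D image positions of 3 feature
# points, stacked into a 6-dimensional vector f = [x1, y1, x2, y2, x3, y3]^T.
f_old = np.array([100.0, 120.0, 200.0, 80.0, 150.0, 210.0])  # earlier image
f_new = np.array([103.0, 118.0, 205.0, 82.0, 152.0, 208.0])  # later image

# Image feature amount change: the element-wise difference of the two vectors.
delta_f = f_new - f_old
print(delta_f)  # -> [ 3. -2.  5.  2.  2. -2.]
```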
In addition, the change amount estimation information is information used for estimating the image feature amount change, and is information other than the image information. The change amount estimation information may be, for example, information acquired (actually measured) from the result of robot control; specifically, it may be joint angle information of the robot 20000. The joint angle information can be acquired from encoders that measure and control the operation of the joint drive motors (in a broad sense, actuators) of the robot. Alternatively, the change amount estimation information may be pose information of the end effector 2220 of the robot 20000 or of the object operated by the robot 20000. The pose information is, for example, a six-dimensional vector comprising the three-dimensional position (x, y, z) of a reference point of the object and the rotations (R1, R2, R3) about the respective axes relative to a reference posture. Various means of acquiring the pose information of the object are conceivable; for example, a distance measurement means using ultrasonic waves, a means using a measuring instrument, a means in which an LED or the like is attached to the hand tip and measurement is performed by detecting the LED, a means using a mechanical three-dimensional measuring device, and the like may be used.
In this way, an abnormality can be detected in robot control using image feature amounts (in a narrow sense, a feature-based visual servo). In this case, comparison processing is performed between the image feature amount change obtained from the actually acquired image information and the estimated image feature amount change obtained from the change amount estimation information, which is acquired from a viewpoint different from the image information.
Note that the control of the robot performed according to image information is not limited to the visual servo. For example, while in the visual servo the feedback of information based on the image information to the control loop is performed continuously, a look-and-move scheme, in which one piece of image information is acquired, the amount of movement toward the target pose is obtained from that image information, and position control is performed according to that amount of movement, may also be used as image-information-based robot control. Furthermore, besides the visual servo and the look-and-move scheme, the means of the present embodiment can also be applied as abnormality detection means to the detection of information from image information in other robot control using image information.
However, as described later, in the means of the present embodiment it is assumed that a Jacobian matrix is used in the computation of the estimated image feature amount change. A Jacobian matrix is information representing the relation between the change amount of a given value and the change amount of another value. For example, even if first information x and second information y are in a nonlinear relation (g in y = g(x) is a nonlinear function), in the vicinity of a given value the change amount Δx of the first information and the change amount Δy of the second information can be regarded as being in a linear relation (h in Δy = h(Δx) is a linear function), and the Jacobian matrix represents this linear relation. That is, in the present embodiment, it is assumed that not the image feature amount itself but the image feature amount change is used. Therefore, when the means of the present embodiment are applied to the look-and-move scheme or to control other than the visual servo, it should be noted that a means that acquires only one piece of image information cannot be used; a means that acquires at least two pieces of image information, so that the image feature amount change can be obtained, is required. If, for example, the means of the present embodiment are applied to the look-and-move scheme, the acquisition of image information and the computation of the target movement amount need to be performed a plurality of times.
Hereinafter, after describing system configuration examples of the robot controller 1000 and the robot of the present embodiment, an outline of the visual servo will be described. On that basis, the abnormality detection means of the present embodiment will be described, and finally modifications will also be described. In the following, a visual servo is taken as an example of robot control using image information, but the following description can be extended to other robot control using image information.
2. system configuration example
Fig. 36 shows a detailed system configuration example of the robot controller 1000 of the present embodiment. However, the robot controller 1000 is not limited to the configuration of Fig. 36, and various modifications, such as omitting part of the above-mentioned elements or adding other elements, are possible.
As shown in Fig. 36, the robot controller 1000 includes a target feature amount input unit 111, a target trajectory generation unit 112, a joint angle control unit 113, a drive unit 114, a joint angle detection unit 115, an image information acquisition unit 116, an image feature amount computation unit 117, the change amount computation unit 1120, the change amount estimation unit 1130, and the abnormality determination unit 1140.
The target feature amount input unit 111 inputs the image feature amount fg serving as the target to the target trajectory generation unit 112. The target feature amount input unit 111 may be realized, for example, as an interface or the like that accepts the input of the target image feature amount fg performed by the user. In the robot control, control is performed to bring the image feature amount f obtained from the image information close to (in a narrow sense, make it coincide with) the target image feature amount fg input here. Alternatively, image information corresponding to the target state (a reference image, a target image) may be acquired, and the target image feature amount fg may be obtained from that image information; or, without holding a reference image, the input of the target image feature amount fg may be accepted directly.
The target trajectory generation unit 112 generates, from the target image feature amount fg and the image feature amount f obtained from the image information, a target trajectory for causing the robot 20000 to operate. Specifically, processing is performed to obtain the change amount Δθg of the joint angles for bringing the robot 20000 close to the target state (the state corresponding to fg). This Δθg becomes a tentative target value of the joint angles. In the target trajectory generation unit 112, the drive amount of the joint angles per unit time (θ̇g, with an overdot, in Fig. 36) may also be obtained when Δθg is obtained.
The joint angle control unit 113 controls the joint angles according to the target value Δθg of the joint angles and the current joint angle value θ. For example, since Δθg is a change amount of the joint angles, processing is performed to obtain, using θ and Δθg, what values the joint angles should take. The drive unit 114 performs control for driving the joints of the robot 20000 in accordance with the control of the joint angle control unit 113.
The joint angle detection unit 115 performs processing for detecting what values the joint angles of the robot 20000 take. Specifically, after the joint angles are changed by the drive control performed by the drive unit 114, the values of the joint angles after the change are detected, and the current joint angle value θ is output to the joint angle control unit 113. The joint angle detection unit 115 may specifically be realized as an interface or the like that acquires the information of the encoders.
The image information acquisition unit 116 acquires image information from an image capturing unit or the like. The image capturing unit here may be an image capturing unit arranged in the environment as shown in Fig. 37, or may be an image capturing unit provided on the arm 2210 or the like of the robot 20000 (for example, a hand-eye camera). The image feature amount computation unit 117 performs computation processing of the image feature amount according to the image information acquired by the image information acquisition unit 116. Various means of computing the image feature amount from the image information, such as edge detection processing and matching processing, are publicly known and can be widely applied in the present embodiment, so a detailed description is omitted. The image feature amount obtained by the image feature amount computation unit 117 is output to the target trajectory generation unit 112 as the latest image feature amount f.
The change amount computation unit 1120 holds the image feature amounts computed by the image feature amount computation unit 117, and computes the image feature amount change Δf as the difference between the image feature amount f_old acquired in the past and the image feature amount f to be processed (in a narrow sense, the latest image feature amount).
The change amount estimation unit 1130 holds the joint angle information detected by the joint angle detection unit 115, and computes the change amount Δθ of the joint angle information as the difference between the joint angle information θ_old acquired in the past and the joint angle information θ to be processed (in a narrow sense, the latest joint angle information). Then, the estimated image feature amount change Δfe is obtained from Δθ. In Fig. 36, an example in which the change amount estimation information is joint angle information is described, but as described above, the pose information of the end effector 2220 of the robot 20000 or of the object may also be used as the change amount estimation information.
The robot control unit 1110 of Fig. 35 may be a control unit corresponding to the target feature amount input unit 111, the target trajectory generation unit 112, the joint angle control unit 113, the drive unit 114, the joint angle detection unit 115, the image information acquisition unit 116, and the image feature amount computation unit 117 of Fig. 36.
As shown in Fig. 38, the means of the present embodiment can also be applied to a robot comprising the following configuration: a robot control unit 1110 that controls the robot (specifically, a robot body 3000 including the arm 2210 and the end effector 2220) according to image information; a change amount computation unit 1120 that obtains the image feature amount change from the image information; a change amount estimation unit 1130 that computes the estimated image feature amount change, which is an estimate of the image feature amount change, from change amount estimation information that is information on the robot 20000 or the object and is information other than the image information; and an abnormality determination unit 1140 that performs abnormality determination by comparison processing between the image feature amount change and the estimated image feature amount change.
As shown in Figs. 19A and 19B, the robot here may be a robot including a control device 600 and a robot body 300. With the configuration of Figs. 19A and 19B, the control device 600 includes the robot control unit 1110 and the like of Fig. 38. In this way, operation based on control according to image information can be performed, and a robot that automatically detects an abnormality in the control can be realized.
The configuration example of the robot of the present embodiment is not limited to Figs. 19A and 19B. For example, as shown in Fig. 39, the robot may include a robot body 3000 and a base unit 350. The robot of the present embodiment may be a dual-arm robot as shown in Fig. 39, which includes, in addition to the portions corresponding to the head and the trunk, a first arm 2210-1 and a second arm 2210-2. In Fig. 39, the first arm 2210-1 is constituted by joints 2211 and 2213 and frames 2215 and 2217 provided between the joints, and the second arm 2210-2 is constituted likewise, but the configuration is not limited to this. Although Fig. 39 shows an example of a dual-arm robot with two arms, the robot of the present embodiment may have three or more arms.
The base unit 350 is provided at the bottom of the robot body 3000 and supports the robot body 3000. In the example of Fig. 39, the base unit 350 is provided with wheels or the like, so that the robot as a whole is configured to be movable. However, the base unit 350 may be configured without wheels or the like and be fixed to the floor or the like. In Fig. 39, a device corresponding to the control device 600 of Figs. 19A and 19B is not shown, but in the robot system of Fig. 39 the control device 600 is stored in the base unit 350, so that the robot body 3000 and the control device 600 are constituted as one body.
Alternatively, instead of providing dedicated control equipment as the control device 600, the above-mentioned robot control unit 1110 and the like may be realized by a substrate built into the robot (more specifically, an IC or the like provided on the substrate).
As shown in Fig. 20, the functions of the robot controller 1000 may also be realized by a server 500 that is communicably connected to the robot via a network 400 including at least one of wired and wireless connections.
Alternatively, in the present embodiment, the server 500 serving as a robot controller may be configured to perform part of the processing of the robot controller of the present invention. In this case, the processing is realized by distributed processing with a robot controller provided on the robot side. The server 500 serving as a robot controller then performs, among the processes of the robot controller of the present invention, the processes allocated to the server 500, while the robot controller provided in the robot performs, among the processes of the robot controller of the present invention, the processes allocated to the robot.
For example, suppose the robot controller of the present invention performs first to M-th processes (M is an integer), and each of the first to M-th processes is divided into a plurality of sub-processes, such that the first process is realized by sub-processes 1a and 1b and the second process is realized by sub-processes 2a and 2b. In this case, distributed processing is conceivable in which the server 500 serving as the robot controller performs sub-processes 1a, 2a, ..., Ma, and the robot controller provided on the robot side performs sub-processes 1b, 2b, ..., Mb. At this time, the robot controller of the present embodiment, that is, the robot controller that executes the first to M-th processes, may be the robot controller that performs sub-processes 1a to Ma, may be the robot controller that performs sub-processes 1b to Mb, or may be the robot controller that performs all of sub-processes 1a to Ma and 1b to Mb. In other words, the robot controller of the present embodiment is a robot controller that executes at least one sub-process of each of the first to M-th processes.
In this way, for example, the server 500, which has a higher processing capability than the terminal device on the robot side (for example, the control device 600 of Figs. 19A and 19B), can perform processing with a high processing load. Furthermore, the server 500 can collectively control the operation of each robot, so that, for example, coordinated operation of a plurality of robots is facilitated.
In recent years, there has been an increasing trend toward manufacturing parts of many kinds in small quantities. When the kind of parts to be manufactured is changed, the operations performed by the robot need to be changed. With the configuration shown in Fig. 20, the server 500 can collectively change the operations performed by the robots without redoing the teaching work for each of a plurality of robots. Furthermore, compared with the case where one robot controller 1000 is provided for each robot, the trouble involved in the software update of the robot controllers 1000 can be greatly reduced.
3. Visual servoing control
Before describing the abnormality detection means of the present embodiment, general visual servo control will be described. Fig. 40 shows a configuration example of a general visual servo control system. As can be seen from Fig. 40, compared with the robot controller 1000 of the present embodiment shown in Fig. 36, the configuration is one from which the change amount computation unit 1120, the change amount estimation unit 1130, and the abnormality determination unit 1140 are removed.
When the number of dimensions of the image feature amount used for the visual servo is n (n is an integer), the image feature amount f is expressed as an image feature amount vector f = [f1, f2, ..., fn]^T. For each element of f, for example, a coordinate value on the image of a feature point (control point) or the like is used. In this case, the target image feature amount fg input from the target feature amount input unit 111 is likewise expressed as fg = [fg1, fg2, ..., fgn]^T.
The joint angles are also expressed as a joint angle vector of a dimension corresponding to the number of joints included in the robot 20000 (in a narrow sense, the arm 2210). For example, if the arm 2210 is a six-degree-of-freedom arm with six joints, the joint angle vector θ is expressed as θ = [θ1, θ2, ..., θ6]^T.
In the visual servo, when the current image feature amount f is obtained, the difference between this image feature amount f and the target image feature amount fg is fed back to the operation of the robot. Specifically, the robot is caused to operate in a direction that reduces the difference between the image feature amount f and the target image feature amount fg. For that purpose, it is necessary to know the relation of how the image feature amount f changes when the joint angles θ are moved. In general, this relation is nonlinear; for example, in the case of f1 = g(θ1, θ2, θ3, θ4, θ5, θ6), the function g is a nonlinear function.
Therefore, in the visual servo, means using a Jacobian matrix J are well known. Even if two spaces are in a nonlinear relation, small change amounts in the respective spaces can be related to each other linearly. The Jacobian matrix J is the matrix that relates these small change amounts to each other.
Specifically, when the hand-tip pose X of the robot 20000 is X = [x, y, z, R1, R2, R3]^T, the Jacobian matrix Ja between the change amount of the joint angles and the change amount of the pose is expressed by the following expression (1), and the Jacobian matrix Ji between the change amount of the pose and the image feature amount change is expressed by the following expression (2).
Numerical expression 1
Ja = ∂X/∂θ (1)
Numerical expression 2
Ji = ∂f/∂X (2)
By using Ja and Ji, the relations among Δθ, ΔX, and Δf can be stated as shown in the following expressions (3) and (4). Ja is commonly referred to as the robot Jacobian matrix, and if the mechanism information of the robot 20000, such as link lengths and rotation axes, is available, Ja can be calculated analytically. On the other hand, Ji can be estimated in advance from the changes of the image feature amounts when the hand-tip pose of the robot 20000 is changed slightly, and means of estimating Ji at any time during operation have also been proposed.
ΔX = Ja·Δθ (3)
Δf = Ji·ΔX (4)
Further, by using the above expressions (3) and (4), the relation between the image feature amount change Δf and the change amount Δθ of the joint angles can be expressed as shown in the following expression (5).
Δf = Jv·Δθ (5)
Here, Jv = Ji·Ja, and Jv represents the Jacobian matrix between the change amount of the joint angles and the image feature amount change. Jv is also expressed as the image Jacobian matrix. The relations of the above expressions (3) to (5) are shown in Fig. 41.
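As a minimal numerical sketch of the chain of expressions (3) to (5) (the 6×6 matrices Ja and Ji below are filled with arbitrary example values; in practice Ja would come from the robot's kinematics and Ji would be estimated as described above):

```python
import numpy as np

rng = np.random.default_rng(0)
Ja = rng.standard_normal((6, 6))  # robot Jacobian: relates Δθ to pose change ΔX
Ji = rng.standard_normal((6, 6))  # image Jacobian part: relates ΔX to feature change Δf

delta_theta = np.array([0.01, -0.02, 0.005, 0.0, 0.01, -0.005])  # small joint change

delta_X = Ja @ delta_theta  # expression (3): ΔX = Ja·Δθ
delta_f = Ji @ delta_X      # expression (4): Δf = Ji·ΔX

Jv = Ji @ Ja                # expression (5): Jv = Ji·Ja, the image Jacobian matrix
# Composing (3) and (4) gives the same feature change as applying Jv directly.
assert np.allclose(delta_f, Jv @ delta_theta)
```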
From the above, the target trajectory generation unit 112 takes the difference of f and fg as Δf and obtains the drive amount (change amount of the joint angles) Δθ of the joint angles. In this way, the change amount of the joint angles for bringing the image feature amount f close to fg can be obtained. Specifically, to obtain Δθ from Δf, both sides of the above expression (5) are multiplied from the left by the inverse matrix Jv^-1 of Jv; further considering a control gain λ, the change amount Δθg of the joint angles serving as the target is obtained by the following expression (6).
Δθg = -λ·Jv^-1·(f - fg) (6)
Note that the inverse matrix Jv^-1 of Jv is used in the above expression (6); when Jv^-1 cannot be obtained, the generalized inverse matrix (pseudo-inverse matrix) Jv# of Jv can be used.
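The control law of expression (6) can be sketched as follows (a hedged illustration, not the embodiment's implementation; the pseudo-inverse is used so that the non-invertible case mentioned above is also covered, and the gain value is an arbitrary example):

```python
import numpy as np

def target_joint_change(f, fg, Jv, lam=0.1):
    """Expression (6): Δθg = -λ·Jv^-1·(f - fg).
    np.linalg.pinv computes the generalized (pseudo-)inverse Jv#,
    which also handles non-square or singular Jv."""
    return -lam * np.linalg.pinv(Jv) @ (f - fg)

# Toy example: with Jv = identity, the update simply moves f toward fg.
f = np.array([1.0, 2.0, 3.0])
fg = np.array([0.0, 0.0, 0.0])
dtheta_g = target_joint_change(f, fg, np.eye(3), lam=0.1)
print(dtheta_g)  # -> [-0.1 -0.2 -0.3]
```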
By using the above expression (6), a new Δθg is obtained each time a new image is obtained. Thereby, using the obtained images, control that brings the state close to the target state (the state in which the image feature amount becomes fg) can be performed while updating the target joint angles. Fig. 42 shows this flow. If the image feature amount f_{m-1} is obtained from the (m-1)-th image (m is an integer), then by setting f = f_{m-1} in the above expression (6), Δθg_{m-1} can be obtained. Then, between the (m-1)-th image and the next, that is, the m-th image, the control of the robot 20000 is performed with the obtained Δθg_{m-1} as the target. When the m-th image is obtained, the image feature amount f_m is obtained from this m-th image, and a new target Δθg_m is calculated by the above expression (6). Between the m-th image and the (m+1)-th image, the calculated Δθg_m is used in the control. This processing is continued until it ends (until the image feature amount comes sufficiently close to fg).
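The per-image update cycle of Fig. 42 can be sketched as a loop. This is a toy simulation under stated assumptions: the camera and feature extraction are replaced by a linear model f = Jv·θ with a known, constant Jv, which is an illustration only, not the embodiment's conditions.

```python
import numpy as np

Jv = np.eye(3)                      # toy image Jacobian (assumed known, constant)
theta = np.array([1.0, -1.0, 0.5])  # current joint angles
fg = np.zeros(3)                    # target image feature amount

def observe(theta):
    # Stand-in for "acquire the m-th image and compute f_m from it".
    return Jv @ theta

lam = 0.5
for m in range(50):
    f = observe(theta)                               # image feature amount f_m
    if np.linalg.norm(f - fg) < 1e-6:                # sufficiently close to fg
        break
    dtheta_g = -lam * np.linalg.pinv(Jv) @ (f - fg)  # new target Δθg_m, expression (6)
    theta = theta + dtheta_g                         # drive joints toward the target

# The loop converges: the feature amount has reached the target.
assert np.linalg.norm(observe(theta) - fg) < 1e-6
```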
Although the change amount of the joint angles serving as the target is obtained, it is not necessarily required to make the joint angles change by the target amount. For example, between the m-th image and the (m+1)-th image, control is performed with Δθg_m as the target value, but the following situation is also often considered: before the actual change amount has reached Δθg_m, the next, that is, the (m+1)-th image is obtained, and a new target value Δθg_{m+1} is calculated from it.
4. Abnormality detection means
The abnormality detection means of the present embodiment will now be described. As shown in Fig. 43A, when the joint angles of the robot 20000 are θp, the p-th image information is acquired, and the image feature amount fp is calculated from this p-th image information. Then, at an acquisition time later than that of the p-th image information, when the joint angles of the robot 20000 are θq, the q-th image information is acquired, and the image feature amount fq is calculated from this q-th image information. Here, the p-th image information and the q-th image information may be image information adjacent in the time series, or may be non-adjacent (other image information is acquired after the acquisition of the p-th image information and before the acquisition of the q-th image information).
In the visual servo, as described above, the differences of fp and fq from fg are used as Δf for the calculation of Δθg, but the difference fq - fp of fp and fq is nothing other than an image feature amount change as well. Furthermore, since the joint angles θp and θq are acquired by the joint angle detection unit 115 from the encoders or the like, they can be acquired as measured values, and the difference θq - θp of θp and θq is the change amount Δθ of the joint angles. That is, for the two pieces of image information, the corresponding image feature amount f and joint angles θ are obtained for each, the image feature amount change is obtained as Δf = fq - fp, and the corresponding change amount of the joint angles is also obtained as Δθ = θq - θp.
As shown in the above expression (5), the relation Δf = Jv·Δθ holds. That is, if Δfe = Jv·Δθ is obtained using the actually measured Δθ = θq - θp and the Jacobian matrix Jv, the obtained Δfe should coincide with the actually measured Δf = fq - fp in an ideal environment that produces no error. Thus, the change amount estimation unit 1130 computes the estimated image feature amount change Δfe by applying, to the change amount of the joint angle information, the Jacobian matrix Jv that relates the joint angle information to the image feature amount (specifically, relates the change amount of the joint angle information to the image feature amount change). As described above, in an ideal environment, the obtained estimated image feature amount change Δfe should coincide with the image feature amount change Δf obtained as Δf = fq - fp in the change amount computation unit 1120; conversely, when there is a large difference between Δf and Δfe, it can be determined that some abnormality has occurred.
Here, as factors producing the error between Δf and Δfe, an error in computing the image feature amount from the image information, an error when the encoder reads the value of the joint angle, an error included in the Jacobian matrix Jv, and the like are conceivable. However, the probability of an error occurring when the encoder reads the value of the joint angle is low compared with the other two. Also, the error included in the Jacobian matrix Jv is not a very large error. On the other hand, since many objects other than the object to be recognized are captured in the image, the frequency of errors in computing the image feature amount from the image information is comparatively high. Moreover, when an abnormality occurs in the image feature amount computation, the error may become very large. For example, if the recognition processing for recognizing the desired object from the image fails, the object may be misrecognized as existing at an image position different from its original position. Thus, in the present embodiment, it is mainly abnormalities in the computation of the image feature amount that are to be detected. However, errors caused by other factors may also be detected as abnormalities.
In the abnormality determination, for example, determination processing using a threshold value is performed. Specifically, the abnormality determination unit 1140 performs comparison processing between a threshold value and the difference information of the image feature amount change Δf and the estimated image feature amount change Δfe, and determines an abnormality when the difference information is larger than the threshold value. For example, a given threshold value Th is set, and when the following expression (7) is satisfied, it is determined that an abnormality has occurred. In this way, an abnormality can be detected by a simple computation using the following expression (7) or the like.
|Δf - Δfe| > Th (7)
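The threshold test of expression (7) can be sketched as follows (a hedged illustration; the vector norm is used here as the difference information, and the numeric values and threshold are arbitrary examples, not values from the embodiment):

```python
import numpy as np

def is_abnormal(delta_f, delta_fe, th):
    """Expression (7): abnormal when |Δf - Δfe| exceeds the threshold Th.
    The norm of the difference vector serves as the difference information."""
    return np.linalg.norm(delta_f - delta_fe) > th

delta_fe = np.array([3.0, -2.0, 5.0])        # estimated change Δfe = Jv·Δθ
delta_f_ok = np.array([3.1, -2.2, 4.9])      # measured change close to the estimate
delta_f_bad = np.array([30.0, 15.0, -40.0])  # e.g. feature detection has failed

print(is_abnormal(delta_f_ok, delta_fe, th=1.0))   # -> False
print(is_abnormal(delta_f_bad, delta_fe, th=1.0))  # -> True
```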
The threshold Th need not be a fixed value; it may be varied according to the situation. For example, the abnormality determination unit 1140 may be configured to set the threshold larger as the difference between the acquisition times of the two pieces of image information used by the variation computation unit 1120 to compute the image feature variation becomes larger.
As shown in Figure 41 and elsewhere, the Jacobian matrix Jv is the matrix that relates Δθ to Δf. As shown in Figure 44, even when the same Jacobian matrix Jv is applied, the Δfe' obtained by applying it to a variation Δθ' larger than Δθ is larger than the Δfe obtained from Δθ. Since it is unrealistic to assume that the Jacobian matrix Jv contains no error at all, Δfe and Δfe' deviate, as shown in Figure 44, from the ideal image feature variations Δfi and Δfi' corresponding to the joint angle changes Δθ and Δθ'. Moreover, as the comparison of A1 and A2 in Figure 44 shows, the larger the variation, the larger this deviation.
If it is assumed that no error occurs in the image feature computation itself, the image feature variation Δf obtained from the image information equals Δfi or Δfi'. In that case, the left side of expression (7) represents the error caused by the Jacobian matrix: it takes a value comparable to A1 when the variations Δθ and Δfe are small, and a value comparable to A2 when the variations Δθ' and Δfe' are large. However, as described above, the same Jacobian matrix Jv is used for both Δfe and Δfe', so although the left side of expression (7) becomes larger, it is not appropriate to judge that the state corresponding to A2 has a higher degree of abnormality than that corresponding to A1. In other words, it is not appropriate that expression (7) is unsatisfied (no abnormality determined) in the situation corresponding to A1 but satisfied (abnormality determined) in the situation corresponding to A2. Therefore, the abnormality determination unit 1140 sets the threshold Th larger as the variations Δθ, Δfe, and so on become larger. Since the threshold Th is then larger in the situation corresponding to A2 than in that corresponding to A1, an appropriate abnormality determination can be made. Because the variations Δθ, Δfe, and so on grow with the difference between the acquisition times of the two pieces of image information (the p-th and q-th image information in Figure 43A), in practice the threshold Th is set, for example, according to the difference between the image acquisition times.
Various controls are conceivable for the case in which the abnormality determination unit 1140 detects an abnormality. For example, when the abnormality determination unit 1140 detects an abnormality, the robot control unit 1110 may perform control that stops the robot 20000. As described above, an abnormality is detected, for example, when a large error arises in the computation of the image feature amount from the image information. That is, if the robot 20000 is controlled using this image feature amount (fq in the example of Figure 43A), the robot 20000 may be moved in a direction that takes the image feature amount farther from the target image feature amount fg. As a result, the arm 2210 or the like may collide with another object, or the robot may take an unreasonable posture and drop the object held by the hand or the like. Accordingly, one example of control at the time of an abnormality is to stop the operation of the robot 20000 itself rather than perform a large motion carrying such risks.
Alternatively, if the concern is only that the image feature amount fq contains a large error and should not be used for control, the robot need not be stopped immediately; it suffices not to use fq for control. That is, when the abnormality determination unit 1140 detects an abnormality, the robot control unit 1110 may skip the control based on the image information acquired later in the time series, of the two pieces of image information used by the variation computation unit 1120 to compute the image feature variation (this later image information is referred to as the abnormality determination image information), and instead perform control based on image information acquired earlier than the abnormality determination image information.
In the example of Figure 43A, the abnormality determination image information is the q-th image. In the example of Figure 42, abnormality determination is performed using two adjacent pieces of image information, and it is determined that there is no abnormality between the (m−2)-th and (m−1)-th image information and between the (m−1)-th and m-th image information, but that there is an abnormality between the m-th and (m+1)-th image information. In this case, f_{m−1} and f_m can be regarded as normal and f_{m+1} as abnormal, so Δθg_{m−1} and Δθg_m can be used for control, but Δθg_{m+1} is not suitable for control. Originally, Δθg_{m+1} would be used for control between the (m+1)-th image information and the next, (m+2)-th, image information, but since that control is inappropriate it is not performed. Instead, between the (m+1)-th and (m+2)-th image information, the previously obtained Δθg_m continues to be used to operate the robot 20000. Since Δθg_m is information that, at least at the time f_m was computed, moved the robot 20000 toward the target, it is unlikely to produce a large error even if it continues to be used after f_{m+1} is computed. Thus, even when an abnormality is detected, the operation of the robot 20000 can be continued by performing control with earlier information, in particular information that was acquired before the abnormality was detected and in which no abnormality was found. Thereafter, once new image information is acquired (the (m+2)-th image information in the example of Figure 42), control is performed using the new image feature amount obtained from that new image information.
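The fallback described above can be sketched as follows: when the newest frame is judged abnormal, the controller keeps applying the most recent joint-angle update obtained from a normal frame instead of stopping. Function and variable names are illustrative, not from the patent.

```python
def select_update(history, abnormal):
    """history: list of joint-angle updates (delta_theta_g) per frame.
    abnormal: parallel list of abnormality flags.
    Returns the update to apply now: the most recent one whose frame
    was judged normal, or None if no trustworthy update exists yet."""
    for update, bad in zip(reversed(history), reversed(abnormal)):
        if not bad:
            return update
    return None  # caller should stop the robot in this case

# Frames m-1 and m were normal; frame m+1 was abnormal, so the
# controller keeps using delta_theta_g of frame m.
history = [[0.02, -0.01], [0.03, 0.0], [0.5, 0.4]]
abnormal = [False, False, True]
print(select_update(history, abnormal))  # -> [0.03, 0.0]
```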
The flowchart of Figure 48 shows the processing flow of the present embodiment with abnormality detection taken into account. When the processing starts, first, image acquisition by the image information acquisition unit 116 and image feature computation by the image feature computation unit 117 are performed, and the variation computation unit 1120 computes the image feature variation (S10001). In addition, joint angle detection by the joint angle detection unit 115 is performed, and the variation inference unit 1130 infers the inferred image feature variation (S10002). Then, abnormality determination is performed according to whether the difference between the image feature variation and the inferred image feature variation is at or below the threshold (S10003).
When the difference is at or below the threshold (Yes in S10003), no abnormality has occurred, so control is performed using the image feature amount obtained in S10001 (S10004). It is then determined whether the current image feature amount is sufficiently close to (in a narrow sense, coincides with) the target image feature amount (S10005); if so, the target has been reached normally and the processing ends. If No in S10005, no abnormality has occurred in the operation itself but the target has not yet been reached, so the processing returns to S10001 and control continues.
When the difference between the image feature variation and the inferred image feature variation exceeds the threshold (No in S10003), it is determined that an abnormality has occurred. It is then determined whether the abnormality has occurred N times in succession (S10006); if so, the abnormality is of a degree at which continuing the operation is undesirable, and the operation is stopped. If the abnormality has not occurred N times in succession, control is performed using a past image feature amount for which no abnormality was determined (S10007), and the processing returns to S10001 for the image processing of the next time step. In the flowchart of Figure 48, as described above, for abnormalities up to a certain degree (here, up to N−1 consecutive occurrences), the operation is not stopped immediately; instead, control is performed in a direction that keeps the operation going.
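The per-cycle decision of the Figure 48 flow can be sketched as below: a normal cycle controls with the new feature, an abnormal cycle falls back to the previous feature, and only N consecutive abnormal cycles stop the robot. The step labels in the comments follow the flowchart; everything else (names, N = 3) is an illustrative assumption.

```python
def run_cycle(state, diff, th, n_stop=3):
    """state: dict holding the count of consecutive abnormal cycles.
    diff: |image feature variation - inferred variation| for this cycle.
    Returns the action: 'control_new', 'control_previous', or 'stop'."""
    if diff <= th:                      # S10003: no abnormality
        state['consecutive'] = 0
        return 'control_new'            # S10004
    state['consecutive'] += 1
    if state['consecutive'] >= n_stop:  # S10006: N consecutive abnormalities
        return 'stop'
    return 'control_previous'           # S10007

state = {'consecutive': 0}
actions = [run_cycle(state, d, th=0.1)
           for d in [0.05, 0.2, 0.3, 0.02, 0.2, 0.2, 0.2]]
print(actions)
# isolated abnormalities are tolerated; three in a row stop the robot
```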
In the description above, no particular attention was paid to the time differences between the acquisition time of the image information, the acquisition time of the joint angle information, and the acquisition time (computation completion time) of the image feature amount. In practice, however, as shown in Figure 45, even if image information is acquired at a given moment, a time lag occurs before the encoder reads the joint angle information corresponding to that image information and sends it to the joint angle detection unit 115. Furthermore, since the image feature amount is computed after the image is acquired, a time lag also arises there, and because the computational load of the feature computation differs depending on the image information, the length of this lag is not constant. For example, when no objects other than the recognition target are captured and the background is a single uniform color, the image feature amount can be computed quickly, whereas when various objects are captured, the computation takes time.
That is, while the abnormality determination using the p-th and q-th image information was described only in simple terms in connection with Figure 43A, in practice, as shown in Figure 45, it is necessary to consider the lag tθp before the joint angle information corresponding to the p-th image information is acquired and the lag tfp before the computation of the image feature amount from the p-th image information completes; likewise, tθq and tfq must be considered for the q-th image information.
The abnormality determination processing starts, for example, at the moment the image feature amount fq of the q-th image information is obtained, but it must be properly determined how long before that moment the feature fp used as the comparison target was obtained, and at what moments θq and θp were acquired.
Specifically, when the image feature amount f1 of the first image information is obtained at an i-th time (i being a natural number) and the image feature amount f2 of the second image information is obtained at a j-th time (j being a natural number with j ≠ i), the variation computation unit 1120 obtains the difference between the image feature amounts f1 and f2 as the image feature variation. When variation inference information p1 corresponding to the first image information is obtained at a k-th time (k being a natural number) and variation inference information p2 corresponding to the second image information is obtained at an l-th time (l being a natural number), the variation inference unit 1130 obtains the inferred image feature variation from the variation inference information p1 and the variation inference information p2.
In the example of Figure 45, the image feature amounts and joint angle information are obtained at various times. Taking the acquisition time of fq (the j-th time) as the reference, the image feature amount fp corresponding to the p-th image information was obtained (tfq + ti − tfp) earlier; that is, the i-th time is determined as the time (tfq + ti − tfp) before the j-th time. Here, ti denotes the difference between the image acquisition times, as shown in Figure 45. Similarly, the l-th time, at which θq was obtained, is determined as the time (tfq − tθq) before the j-th time, and the k-th time, at which θp was obtained, as the time (tfq + ti − tθp) before the j-th time. In the method of the present embodiment, correspondence between Δf and Δθ must be established; specifically, if Δf is obtained from the p-th and q-th image information, then Δθ must also correspond to the p-th and q-th image information. Otherwise, the inferred image feature variation Δfe obtained from Δθ would fundamentally have no correspondence with Δf, and the comparison processing would be meaningless. As described above, therefore, determining the correspondence of the times is important. In Figure 45, since the joint angles themselves are driven at very high speed and high frequency, the driving is treated as a continuous process.
In typical robots 20000 and robot control devices 1000, the difference between the acquisition time of the image information and that of the corresponding joint angle information can be regarded as sufficiently small. Accordingly, the k-th time may be regarded as the acquisition time of the first image information and the l-th time as the acquisition time of the second image information. In that case, tθp and tθq in Figure 45 can be set to 0, which simplifies the processing.
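The timing bookkeeping of Figure 45 can be sketched numerically, taking the moment fq becomes available (the j-th time) as the reference clock. The function name and the concrete delay values are illustrative assumptions.

```python
def aligned_moments(t_j, t_i, tf_p, tf_q, tth_p, tth_q):
    """Return the moments at which fp, theta_q, and theta_p were obtained,
    on the same clock as t_j (the moment fq became available).
    t_i: interval between the two image acquisitions; tf_p, tf_q:
    feature-computation delays; tth_p, tth_q: encoder read-out delays."""
    t_fp = t_j - (tf_q + t_i - tf_p)   # i-th time: fp became available
    t_tq = t_j - (tf_q - tth_q)        # l-th time: theta_q was read
    t_tp = t_j - (tf_q + t_i - tth_p)  # k-th time: theta_p was read
    return t_fp, t_tq, t_tp

# Image q acquired at 10.0, fq ready 0.4 later -> t_j = 10.4;
# image p was acquired 1.0 earlier, at 9.0.
print(aligned_moments(t_j=10.4, t_i=1.0, tf_p=0.3, tf_q=0.4,
                      tth_p=0.05, tth_q=0.05))
```

With these values, fp became available at 9.3 (image p at 9.0 plus its 0.3 computation delay), and the encoder readings fall at 10.05 and 9.05, consistent with the offsets stated in the text.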
As a more concrete example, consider a method in which the next piece of image information is acquired at the moment the image feature amount has been computed from the previous piece of image information. Figure 46 shows an example of this case. The vertical axis of Figure 46 is the value of the image feature amount; the "actual feature amount" is the value that would be obtained if the image feature amount corresponding to the joint angle information at each moment were available, and it cannot be confirmed in the actual processing. From the fact that the actual feature amount changes smoothly, the driving of the joint angles can be considered continuous.
In this case, the image feature amount corresponding to the image information acquired at the time of B1 is obtained at the time of B2, a lag of t2 later, so the actual feature amount at B1 corresponds to (and, in the absence of error, coincides with) the image feature amount at B2. The next image information is then acquired at the time of B2. Similarly, for the image information acquired at B2, the computation finishes at B3, and the next image information is acquired at B3. Likewise, for the image information acquired at B4, the computation finishes at B5, after a lag of t1, and the next image information is acquired at B5.
In the example of Figure 46, when the image feature amount computed at the time of B5 and the image feature amount computed at the time of B2 are used for the abnormality determination processing, the acquisition times of the corresponding image information are B4 and B1, respectively. As described above, when the difference between the acquisition time of the image information and that of the corresponding joint angle information is sufficiently small, the joint angle information at the times of B4 and B1 is used. Hence, as shown in Figure 46, letting Ts be the difference between B2 and B5 and taking B5 as the reference time, the feature amount used as the comparison target is the one from Ts earlier. The two pieces of joint angle information used to obtain the joint angle difference are those from t1 earlier and from (Ts + t2) earlier. The difference between the acquisition times of the two pieces of image information is therefore (Ts + t2 − t1); when the threshold Th is determined from the difference between the image acquisition times, the value (Ts + t2 − t1) is used.
Various acquisition times can thus be considered, but as described above, all cases share the point that the times are determined so that Δf and Δθ correspond to each other.
5. Modifications
In the description above, Δf and Δθ were obtained, the inferred image feature variation Δfe was obtained from Δθ, and Δf was compared with Δfe. The method of the present embodiment, however, is not limited to this. For example, by a measurement method such as that described above, position-and-posture information of the hand point of the robot 20000, or of the object held by the hand point, may be obtained.
In this case, position-and-posture information X is obtained as the variation inference information, so its variation ΔX can be obtained. As shown in expression (4) above, applying the Jacobian matrix Ji to ΔX yields the inferred image feature variation Δfe, just as in the case of Δθ. Once Δfe is obtained, the subsequent processing is the same as in the example above. That is, the variation inference unit 1130 computes the inferred image feature variation by applying, to the variation of the position-and-posture information, a Jacobian matrix that relates the position-and-posture information to the image feature amount (specifically, one that relates the variation of the position-and-posture information to the image feature variation). Figure 43B shows the flow of this processing, corresponding to Figure 43A.
Here, when the position and posture of the hand point (the hand or the end effector 2220) of the robot 20000 is used as the position-and-posture information, Ji is the information that relates the variation of the hand-point position-and-posture information to the image feature variation. When the position and posture of the object is used as the position-and-posture information, Ji is the information that relates the variation of the object's position-and-posture information to the image feature variation. Alternatively, if it is known with what relative position and posture the end effector holds the object, then, since the position-and-posture information of the end effector 2220 and that of the object correspond one to one, the information of one can be converted into the information of the other. That is, various embodiments are conceivable, such as acquiring the position-and-posture information of the end effector 2220, converting it into the position-and-posture information of the object, and then obtaining Δfe with the Jacobian matrix Ji that relates the variation of the object's position-and-posture information to the image feature variation.
Furthermore, the comparison processing of the abnormality determination of the present embodiment is not limited to comparing the image feature variation Δf with the inferred image feature variation Δfe. The image feature variation Δf, the variation ΔX of the position-and-posture information, and the variation Δθ of the joint angle information can be converted into one another by using the Jacobian matrices and their inverse matrices (generalized inverse matrices in a broad sense), that is, the inverse Jacobian matrices.
Hence, as shown in Figure 49, the method of the present embodiment can be applied to a robot control device comprising: a robot control unit 1110 that controls the robot 20000 according to image information; a variation computation unit 1120 that obtains a position-and-posture variation representing the variation of the position-and-posture information of the end effector 2220 of the robot 20000 or of the object, or a joint angle variation representing the variation of the joint angle information of the robot 20000; a variation inference unit 1130 that obtains an image feature variation from the image information and, from the image feature variation, obtains an inferred position-and-posture variation, that is, an inferred amount of the position-and-posture variation, or an inferred joint angle variation, that is, an inferred amount of the joint angle variation; and an abnormality determination unit 1140 that performs abnormality determination by comparison processing of the position-and-posture variation with the inferred position-and-posture variation, or of the joint angle variation with the inferred joint angle variation.
Compared with Figure 36, the configuration of Figure 49 has the roles of the variation computation unit 1120 and the variation inference unit 1130 interchanged. That is, the variation computation unit 1120 obtains the variation (here, the joint angle variation or the position-and-posture variation) from the joint angle information, and the variation inference unit 1130 infers the variation (obtains the inferred joint angle variation or the inferred position-and-posture variation) from the difference between the image feature amounts. In Figure 49 the variation computation unit 1120 is drawn as acquiring joint angle information, but as described above, the variation computation unit 1120 may also obtain position-and-posture information from measurement results or the like.
Specifically, when Δf and Δθ are obtained, the inferred joint angle variation Δθe may be obtained from the following expression (8), derived from expression (5) above, and Δθ compared with Δθe. Concretely, a given threshold Th2 is used, and an abnormality is determined when the following expression (9) holds.
Δθe = Jv⁻¹·Δf    (8)
|Δθ − Δθe| > Th2    (9)
Alternatively, when Δf and ΔX are obtained using a measurement method as described above, the inferred position-and-posture variation ΔXe may be obtained from the following expression (10), derived from expression (4) above, and ΔX compared with ΔXe. Concretely, a given threshold Th3 is used, and an abnormality is determined when the following expression (11) holds.
ΔXe = Ji⁻¹·Δf    (10)
|ΔX − ΔXe| > Th3    (11)
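A sketch of the joint-space comparison of expressions (8) and (9): the measured feature variation Δf is mapped back through the inverse Jacobian to an inferred joint angle variation Δθe and compared with the measured Δθ. A 2×2 invertible Jacobian keeps the sketch simple; real systems would use a generalized inverse, and all names here are illustrative.

```python
def mat_vec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

def inv_2x2(m):
    """Inverse of a 2x2 matrix (assumed nonsingular for this sketch)."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def abnormal_in_joint_space(delta_theta, delta_f, jv, th2):
    """Expressions (8)/(9): delta_theta_e = Jv^-1 * delta_f; abnormal
    when |delta_theta - delta_theta_e| exceeds th2."""
    delta_theta_e = mat_vec(inv_2x2(jv), delta_f)
    diff = sum((a - b) ** 2
               for a, b in zip(delta_theta, delta_theta_e)) ** 0.5
    return diff > th2

jv = [[2.0, 0.0], [0.0, 4.0]]       # toy image Jacobian
delta_theta = [0.1, 0.05]
delta_f = mat_vec(jv, delta_theta)  # consistent measurement
print(abnormal_in_joint_space(delta_theta, delta_f, jv, th2=0.01))     # -> False
print(abnormal_in_joint_space(delta_theta, [1.0, 1.0], jv, th2=0.01))  # -> True
```

The same structure applies to expressions (10)/(11), with Ji in place of Jv and ΔX in place of Δθ.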
Moreover, the comparison is not limited to information obtained directly. For example, when Δf and Δθ are obtained, the inferred position-and-posture variation ΔXe may be obtained from Δf using expression (10) above, and the position-and-posture variation ΔX obtained from Δθ using expression (3) above (strictly speaking, this ΔX is not a measured value but an inferred value), and the determination of expression (11) above then performed.
Alternatively, when Δf and ΔX are obtained, the inferred joint angle variation Δθe may be obtained from Δf using expression (8) above, and the joint angle variation Δθ obtained from ΔX using the following expression (12), derived from expression (3) above (strictly speaking, this Δθ is not a measured value but an inferred value), and the determination of expression (9) above then performed.
Δθ = Ja⁻¹·ΔX    (12)
That is, the variation computation unit 1120 performs any one of the following processes: acquiring a plurality of pieces of position-and-posture information and obtaining their difference as the position-and-posture variation; acquiring a plurality of pieces of position-and-posture information and obtaining the joint angle variation from their difference; acquiring a plurality of pieces of joint angle information and obtaining their difference as the joint angle variation; and acquiring a plurality of pieces of joint angle information and obtaining the position-and-posture variation from their difference.
Figure 47 summarizes the relations among Δf, ΔX, and Δθ described above, together with the numbers of the corresponding expressions in this specification. That is, in the method of the present embodiment, once any two of Δf, ΔX, and Δθ are obtained, they can be converted into any one of Δf, ΔX, and Δθ and compared, so that the method of the present embodiment can be realized; various modifications are possible as to which information is acquired and which information is used for the comparison.
The robot control device 1000 and the like of the present embodiment may also realize part or most of its processing by a program. In that case, a processor such as a CPU executes the program, thereby realizing the robot control device 1000 and the like of the present embodiment. Specifically, a program stored in a non-transitory information storage medium is read out, and a processor such as a CPU executes the read program. Here, the information storage medium (a computer-readable medium) stores programs, data, and the like, and its function can be realized by an optical disc (DVD, CD, or the like), an HDD (hard disk drive), a memory (card-type memory, ROM, or the like), and so on. A processor such as a CPU performs the various processes of the present embodiment according to the program (data) stored in the information storage medium. That is, the information storage medium stores a program for causing a computer (a device comprising an operation unit, a processing unit, a storage unit, and an output unit) to function as the units of the present embodiment (a program for causing the computer to execute the processing of the units).
Although the present embodiment has been described above in detail, those skilled in the art will readily understand that many modifications are possible without substantially departing from the novel matter and effects of the invention. All such modifications are therefore included within the scope of the invention. For example, any term that appears at least once in the description or drawings together with a different term of broader or identical meaning may be replaced by that different term anywhere in the description or drawings. The configurations and operations of the robot control device 1000 and the like are also not limited to those described in the present embodiment, and various modifications are possible.
Seventh embodiment
1. Method of the present embodiment
First, the method of the present embodiment will be described. Inspection (in particular, visual inspection) of an object is used in many situations. For visual inspection, the basic approach is inspection by human eyes, but from the viewpoints of saving labor for the inspecting user and of increasing inspection accuracy, methods that automate the inspection with an inspection device have been proposed.
The inspection device here may be a dedicated device; for example, as shown in Figure 54, a dedicated inspection device comprising an imaging unit CA, a processing unit PR, and an interface unit IF is conceivable. In this case, the inspection device acquires a captured image of the inspection object OB taken with the imaging unit CA, and performs inspection processing using the captured image in the processing unit PR. Various contents of the inspection processing are conceivable. For example, an image of the inspection object OB in a state determined in advance to be acceptable is obtained as a pass image (it may be a captured image, or it may be constructed from model data), and comparison processing between this pass image and the actually captured image is performed. If the captured image is close to the pass image, the inspection object OB in the captured image can be determined to be acceptable; if the difference between the captured image and the pass image is large, the inspection object OB can be determined to have some problem and be defective. In addition, Patent Document 1 discloses a method of using a robot as the inspection device.
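The pass/fail comparison described above can be sketched minimally as a pixel-wise difference against a pre-registered pass image, judged against a tolerance. Real inspections use far more robust comparisons (edges, features, regions); the function name, image representation, and tolerance are illustrative assumptions.

```python
def passes_inspection(captured, reference, tolerance):
    """Images as 2D lists of grayscale values, same size.
    Returns True (pass) when the mean absolute pixel difference
    between the captured image and the pass image is within tolerance."""
    total, count = 0, 0
    for row_c, row_r in zip(captured, reference):
        for c, r in zip(row_c, row_r):
            total += abs(c - r)
            count += 1
    return total / count <= tolerance

reference = [[100, 100], [100, 100]]   # registered pass image
good_shot = [[101, 99], [100, 102]]    # close to the pass image
bad_shot = [[30, 200], [15, 250]]      # far from the pass image
print(passes_inspection(good_shot, reference, tolerance=5))  # -> True
print(passes_inspection(bad_shot, reference, tolerance=5))   # -> False
```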
However, as is clear from the example of the pass image above, in order to perform an inspection with an inspection device, information for that inspection must be set in advance. For example, although it depends on how the inspection object OB is arranged, information such as from what direction the inspection object OB is to be observed must be set in advance.
In general, how the inspection object OB appears (in a narrow sense, in what shape and size it is captured in the image) changes with the relative relation between the inspection object OB and the position and direction of observation. Hereinafter, the position from which the inspection object OB is observed is referred to as the viewpoint position; in a narrow sense, the viewpoint position means the position at which the imaging unit is arranged. The direction in which the inspection object OB is observed is referred to as the line-of-sight direction; in a narrow sense, the line-of-sight direction means the imaging direction (optical axis direction) of the imaging unit. If no reference viewpoint position and line-of-sight direction are set, the appearance of the inspection object OB may change at every inspection, so a visual inspection that judges the inspection object OB as normal or abnormal in accordance with its appearance may be impossible.
Likewise, for a pass image serving as the reference for judging the inspection object OB to be free of problems, it cannot be determined which image, observed from which viewpoint position and line-of-sight direction, should be kept. That is, if the position and direction of observation at inspection time are undetermined, the comparison target (inspection reference) for the captured image obtained at inspection time is also undetermined, and an appropriate inspection cannot be performed. Admittedly, if images of an inspection object OB judged acceptable were kept as observed from all viewpoint positions and line-of-sight directions, the situation of having no pass image could be avoided. In that case, however, the number of viewpoint positions and line-of-sight directions would be extremely large, and the number of pass images to keep would likewise be extremely large, which is unrealistic. For these reasons as well, the pass images to be kept must be determined in advance.
Furthermore, in general, both the qualified image and the captured image contain information that is unnecessary for the inspection, so when the entire image is used in the inspection process (comparison process), the inspection accuracy may be degraded. For example, the captured image may show tools, jigs, and the like in addition to the inspection object, and such information is preferably not used for the inspection. Moreover, when only a part of the inspection object is the target of inspection, information from regions of the inspection object outside the inspection target can also lower the inspection accuracy. Specifically, as described later with reference to Figs. 64A to 64D, in a work operation that assembles a comparatively small object B to a comparatively large object A, the target of inspection should be the surroundings of the assembled object B; the necessity of inspecting the whole of object A is low, and making the whole of A the inspection target can even raise the probability of misjudgment. Therefore, from the standpoint of improving the accuracy of the inspection process, the inspection region is also an important piece of information for the inspection.
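The effect of restricting the comparison to an inspection region can be illustrated with a minimal sketch; this is not part of the embodiment, and the toy images, region, and difference measure are assumed for the example:

```python
import numpy as np

def roi_difference(captured, qualified, roi):
    """Mean absolute pixel difference, restricted to the inspection region.

    captured, qualified: 2-D grayscale arrays of equal shape.
    roi: boolean mask of the same shape; True marks the inspection region.
    """
    diff = np.abs(captured.astype(float) - qualified.astype(float))
    return diff[roi].mean()

# Toy images: identical except for clutter (a jig) outside the ROI.
qualified = np.zeros((8, 8))
captured = qualified.copy()
captured[0, 0] = 255          # jig/tool visible outside the region of interest

roi = np.zeros((8, 8), dtype=bool)
roi[2:6, 2:6] = True          # inspect only around the assembled part

print(roi_difference(captured, qualified, roi))   # 0.0: clutter has no effect
print(np.abs(captured - qualified).mean())        # > 0: whole-image comparison is penalized
```

The clutter outside the region perturbs the whole-image score but not the region-restricted score, which is why the inspection region matters for accuracy.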
Conventionally, however, inspection information such as the above-mentioned inspection region, viewpoint position, line-of-sight direction, and qualified image has been set by a user having specialized knowledge of image processing. This is because, although the comparison process between the qualified image and the captured image is performed by image processing, the setting of the information required for the inspection must be changed in accordance with the specific content of that image processing.
For example, which of edge-based image processing, image processing using pixel values, image processing using luminance, color difference, or hue, or some other image processing is suitable for the comparison process between the qualified image and the captured image (in a narrow sense, the similarity determination process) may change depending on the shape, tone, texture, and the like of the inspection object OB. Thus, in an inspection in which the content of the image processing can be changed, the user must appropriately set which image processing is to be performed.
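The dependence on the chosen processing can be sketched as a selectable comparison function; the edge extractor and similarity measure below are illustrative stand-ins, not the embodiment's actual processing:

```python
import numpy as np

def edge_map(img):
    # Crude gradient-magnitude edges (a stand-in for a real edge detector).
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def similarity(captured, qualified, method):
    """Similarity in [0, 1]; which `method` is suitable depends on the object."""
    if method == "edge":        # suits objects with complex contours
        a, b = edge_map(captured), edge_map(qualified)
    elif method == "pixel":     # suits objects with strong tone variation
        a, b = captured.astype(float), qualified.astype(float)
    else:
        raise ValueError(method)
    denom = np.abs(a).sum() + np.abs(b).sum()
    return 1.0 - np.abs(a - b).sum() / denom if denom else 1.0

img = np.arange(64, dtype=float).reshape(8, 8)
print(similarity(img, img, "edge"))    # 1.0 for identical images
print(similarity(img, img, "pixel"))   # 1.0
```

Swapping `method` changes what image content drives the score, which is exactly why the setting must match the object's shape, tone, and texture.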
Moreover, even when the content of the image processing has been set, or when highly versatile image processing has been set in advance, the user still needs to properly understand the content of that image processing. This is because, when the specific content of the image processing changes, the viewpoint position and line-of-sight direction suitable for the inspection can also change. For example, when the comparison process uses edge information, a position and direction from which a geometrically complex part of the inspection object OB can be observed should be set as the viewpoint position and line-of-sight direction, whereas a position and direction observing a flat part is unsuitable as the viewpoint. Conversely, when the comparison process uses pixel values, it is preferable to set as the viewpoint position and line-of-sight direction a position and direction from which one can observe a region that varies greatly in tone, or a region that appears bright because it is sufficiently illuminated by the light source. That is, with conventional means, specialized knowledge of image processing is required to set the inspection information, including the viewpoint position, line-of-sight direction, and qualified image. Furthermore, if the content of the image processing differs, the criterion for the comparison process between the qualified image and the captured image must also be changed. For example, a criterion such as how similar the qualified image must be to the captured image to pass, and how different it may be before failing, must be determined in accordance with the content of the image processing; without specialized knowledge of image processing, such a criterion cannot be set either.
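The processing-dependent pass/fail criterion can be sketched as a per-method threshold; the threshold values below are assumed placeholders for illustration, not values specified by the embodiment:

```python
def judge(similarity_score, method):
    """Pass/fail judgment; the criterion depends on the image processing used.

    Thresholds are illustrative assumptions: here edge comparison is given a
    stricter criterion than raw pixel comparison.
    """
    thresholds = {"edge": 0.95, "pixel": 0.85}   # assumed example values
    return similarity_score >= thresholds[method]

print(judge(0.90, "pixel"))   # True: passes the looser pixel criterion
print(judge(0.90, "edge"))    # False: fails the stricter edge criterion
```

The same similarity score passes under one processing method and fails under another, which is why setting the criterion presupposes understanding the processing.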
In other words, even if the inspection itself can be automated by using a robot or the like, it remains difficult to set the information required for the inspection, and for a user without specialized knowledge it cannot be said that automating the inspection is easy.
In addition, the robot envisaged by the applicant is a robot that, by making the teaching a user performs when executing robot work easy, and by being provided with various sensors and the like so that the robot itself can recognize its working environment, can flexibly carry out a variety of work. Such a robot is suited to multi-product manufacturing (in a narrow sense, multi-product small-lot manufacturing in which the quantity per product type is small). However, even if teaching at manufacturing time is made easy, easily inspecting the manufactured goods remains a separate problem. This is because, when the goods differ, the part to be inspected also differs, and as a result the inspection region that should be the comparison target in the captured image and the qualified image differs for each product. That is, when multi-product manufacturing is assumed, entrusting the setting of the inspection region to the user imposes a large burden associated with that setting process and causes a reduction in productivity.
The applicant therefore proposes the following means: second inspection information for the inspection process is generated from first inspection information, thereby reducing the burden on the user in the inspection process and improving productivity in robot work. Specifically, the robot 30000 of the present embodiment is a robot that performs an inspection process on an inspection object using a captured image of the inspection object taken by an imaging unit (for example, the imaging unit 5000 of Fig. 52); it generates, from the first inspection information, second inspection information including the inspection region for the inspection process, and performs the inspection process in accordance with the second inspection information.
Here, the first inspection information is information that the robot 30000 can obtain at a time earlier than the execution of the inspection process, and denotes the information used to generate the second inspection information. Since the first inspection information is obtained in advance, it can also be expressed as prior information. In the present embodiment, the first inspection information may be input by the user, or may be generated within the robot 30000. Even when the first inspection information is input by the user, its input does not require specialized knowledge of image processing, and it is information that can be input easily. Specifically, it can be information including at least one of shape information of the inspection object OB, position/pose information of the inspection object OB, and an inspection target position relative to the inspection object OB.
As described later, the second inspection information can be generated by using the shape information (in a narrow sense, three-dimensional model data), the position/pose information, and the information on the inspection target position. The shape information, such as CAD data, is usually obtained in advance, and when inputting the shape information the user selects from existing information. For example, in a situation where data of various objects that are candidates for the inspection object OB is maintained, the user selects the inspection object OB from among those candidates. As for the position/pose information, if it is known how the inspection object OB is arranged at inspection time (for example, with what pose and at what position on the workbench it is placed), the position/pose information can also be set easily, and its input does not require specialized knowledge of image processing. The inspection target position is information indicating the position on the inspection object OB at which the inspection is to be performed; for example, if a given part of the inspection object OB is to be inspected for breakage, it is information indicating the position of that given part. Also, when the target is an inspection object OB in which an object B is assembled to an object A, and the inspection concerns whether the assembly of object A and object B was performed normally, the assembly position of object A and object B (contact surface, contact point, mounting position, or the like) becomes the inspection target position. Likewise, the inspection target position can be input easily as long as the content of the inspection is understood, and its input does not require specialized knowledge of image processing.
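The three items of first inspection information above can be sketched as a simple data record; the field names, units, and file name are illustrative assumptions, not prescribed by the embodiment:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FirstInspectionInfo:
    """Prior information available before the inspection process.

    Field names are illustrative; the embodiment only requires that at
    least one of these items be available.
    """
    shape_model: Optional[str] = None                         # e.g. path to CAD / 3-D model data
    position: Tuple[float, float, float] = (0.0, 0.0, 0.0)    # where OB sits (pose information)
    orientation_rpy: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    target_position: Optional[Tuple[float, float, float]] = None  # where to inspect

info = FirstInspectionInfo(
    shape_model="object_AB_assembled.step",   # selected from existing CAD data
    position=(0.40, 0.10, 0.0),               # pose on the workbench
    target_position=(0.40, 0.10, 0.05),       # e.g. assembly point of A and B
)
print(info.target_position)
```

None of these fields presuppose image-processing knowledge: shape is selected from existing data, pose is where the object is placed, and the target position follows from the content of the inspection.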
The means of the present embodiment is not limited to automatically generating all of the second inspection information. For example, part of the second inspection information may be generated using the means of the present embodiment while the rest of the second inspection information is input manually by the user. In this case, the user cannot omit the input of the second inspection information entirely, but at least the view information and the like that are difficult to set can be generated automatically, so the advantage that the inspection is made easy by using the means of the present embodiment is unchanged.
Furthermore, since the inspection process is performed on the result of robot work by the robot 30000, the first inspection information may also be information obtained in the robot work.
Robot work here refers to work performed by the robot, and a variety of operations are conceivable, such as joining by screw fastening, welding, pressure welding, snap-fitting, and the like, and deformation using a hand, tool, or jig. When the inspection process is performed on the result of robot work, the inspection process determines whether the robot work was performed normally. In this case, in order to begin executing the robot work, various information related to the inspection object OB and the work content must be obtained. For example, where and in what pose the work object (all or part of the inspection object OB) is located before the work, and to what position and pose it changes after the work, are known information. If screw fastening or welding is performed, the position of the fastening screw in the work object and the position to be welded are known. Likewise, if multiple objects are combined, where and from what direction object A is combined with what object is known information; and if deformation is applied to the work object, the deformation position in the work object and the shape after deformation are, in a narrow sense, all known information.
That is, when robot work is the target, on the premise that the robot work is completed, a considerable part of the information contained in the first inspection information (depending on circumstances, all of the required first inspection information), including the information corresponding to the above-mentioned shape information, position/pose information, and inspection target position, is known. In the robot 30000 of the present embodiment, the first inspection information is thus diverted from the information held by the unit that controls the robot (for example, the processing unit 11120 of Fig. 50) and the like. Moreover, even when the means of the present embodiment is applied to a processing device 10000 separate from the robot 30000, as described later with reference to Figs. 51A and 51B, the processing device 10000 obtains the first inspection information from the control unit 3500 or the like included in the robot. From the user's point of view, therefore, the second inspection information can be generated easily without re-entering the first inspection information for the inspection.
Thus, even a user without specialized knowledge of image processing can easily execute the inspection (at least obtain the second inspection information), or the burden of setting the second inspection information when executing the inspection can be reduced. In the following description of this specification, an example is described in which the target of the inspection process is the result of robot work; that is, the user need not input the first inspection information. As described above, however, the user may input part or all of the first inspection information. Even when the user inputs the first inspection information, no specialized knowledge is required for that input, so the advantage that the inspection is made easy for the user is unchanged.
In the following description, as explained later with reference to Figs. 52 and 53, an example is mainly described in which the robot 30000 generates the second inspection information and the inspection process is executed in that robot 30000. The means of the present embodiment is not limited to this, however: the description below can be extended to a means in which, as shown in Fig. 51A, the second inspection information is generated in the processing device 10000 and the robot 30000 obtains this second inspection information and executes the inspection process. Alternatively, it may be extended to a means in which, as shown in Fig. 51B, the second inspection information is generated in the processing device 10000 and the inspection process using this second inspection information is executed not in a robot but in a dedicated inspection device or the like.
Hereinafter, system configuration examples of the robot 30000 and the processing device 10000 of the present embodiment are described, and then concrete processing flows are described. More specifically, the flow from obtaining the first inspection information to generating the second inspection information is described as offline processing, and the flow of the actual inspection process performed by the robot using the generated second inspection information is described as online processing.
2. system configuration example
Next, system configuration examples of the robot 30000 and the processing device 10000 of the present embodiment are described. As shown in Fig. 50, the robot of the present embodiment includes an information acquisition unit 11110, a processing unit 11120, a robot mechanism 300000, and an imaging unit 5000. The robot 30000 is not limited to the configuration of Fig. 50, however, and various modifications are possible, such as omitting some of the above-mentioned components or adding other components.
The information acquisition unit 11110 obtains the first inspection information before the inspection process. When the user has input the first inspection information, the information acquisition unit 11110 performs a process of accepting the input information from the user. When information used in robot work serves as the first inspection information, the information acquisition unit 11110 may perform a process of reading, from a storage unit or the like not shown in Fig. 50, the control information used by the processing unit 11120 at work time.
The processing unit 11120 performs a generation process of the second inspection information in accordance with the first inspection information acquired by the information acquisition unit 11110, and performs the inspection process using the second inspection information. The processing in the processing unit 11120 is described in detail later. In addition to the inspection process, the processing unit 11120 performs control of the robot 30000 for robot work other than the inspection process (for example, assembly). For example, the processing unit 11120 controls the arm 3100 included in the robot mechanism 300000, the imaging unit 5000, and the like. The imaging unit 5000 may also be a hand-eye camera mounted on the arm 3100 of the robot.
As shown in Fig. 51A, the means of the present embodiment can also be applied to a processing device as follows: a processing device 10000 that outputs information for the inspection process to a device that performs the inspection process on an inspection object using a captured image of the inspection object taken by an imaging unit (the imaging unit 5000 is shown in Fig. 51A, but the imaging unit is not limited to this). The processing device generates, from the first inspection information, second inspection information that includes view information representing the viewpoint position and line-of-sight direction of the imaging unit for the inspection process and the inspection region for the inspection process, and outputs the second inspection information to the device that performs the inspection process. In this case, the acquisition of the first inspection information and the generation of the second inspection information are performed by the processing device 10000, and as shown in Fig. 51A, the processing device 10000 can be realized as a processing device including the information acquisition unit 11110 and the processing unit 11120.
Here, the device performing the inspection process may be the robot 30000 as mentioned above. In this case, as shown in Fig. 51A, the robot 30000 includes the arm 3100, the imaging unit 5000 used for the inspection process of the inspection object, and the control unit 3500 that controls the arm 3100 and the imaging unit 5000. The control unit 3500 obtains from the processing device 10000, as the second inspection information, information including the view information representing the viewpoint position and line-of-sight direction of the imaging unit 5000 and the inspection region, and, in accordance with this second inspection information, performs control to move the imaging unit 5000 to the viewpoint position and line-of-sight direction corresponding to the view information, and executes the inspection process using the obtained captured image and the inspection region.
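The control-side flow just described (move the camera per the view information, capture, compare within the inspection region) can be sketched as follows; the robot interface, data layout, and threshold are hypothetical stand-ins, not an actual robot API:

```python
def run_inspection(second_info, robot, threshold=0.9):
    """Move the camera per the view information, capture, compare in the ROI."""
    viewpoint, gaze_dir = second_info["view"]          # view information
    roi = second_info["roi"]                           # inspection region
    qualified = second_info["qualified_image"]

    robot.move_camera(viewpoint, gaze_dir)             # role of control unit 3500
    captured = robot.capture()                         # role of imaging unit 5000
    score = compare_in_roi(captured, qualified, roi)   # comparison process
    return score >= threshold                          # pass / fail

def compare_in_roi(captured, qualified, roi):
    total = diff = 0
    for (y, x) in roi:
        total += 1
        diff += abs(captured[y][x] - qualified[y][x]) / 255.0
    return 1.0 - diff / total if total else 1.0

class FakeRobot:                                       # stand-in for exercising the flow
    def move_camera(self, p, d): self.pose = (p, d)
    def capture(self): return [[10] * 4 for _ in range(4)]

info = {"view": ((0, 0, 0.3), (0, 0, -1)),
        "roi": [(1, 1), (1, 2), (2, 1), (2, 2)],
        "qualified_image": [[10] * 4 for _ in range(4)]}
print(run_inspection(info, FakeRobot()))               # True: identical in ROI
```

The point of the sketch is the division of labor: the second inspection information supplies the viewpoint, region, and reference image, and the executing side only needs to move, capture, and compare.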
In this way, the second inspection information can be generated in the processing device 10000, and another machine can use this second inspection information to properly execute the inspection process. If the device performing the inspection process is the robot 30000, then, as in Fig. 50, a robot that performs the inspection process using the second inspection information can be realized; however, Fig. 51A differs from Fig. 50 in that the machine executing the generation process of the second inspection information and the machine executing the inspection process using the second inspection information are different.
The processing device 10000 may not only perform the generation process of the second inspection information but also perform control processing of the robot 30000 in coordination with it. For example, the processing unit 11120 of the processing device 10000 generates the second inspection information and generates control information of the robot based on this second inspection information. In this case, the control unit 3500 of the robot causes the arm 3100 and the like to operate in accordance with the control information generated by the processing unit 11120 of the processing device 10000. That is, the processing device 10000 undertakes the substantive part of the control of the robot, and in this case the processing device 10000 can also be understood as a control device of the robot.
The main body that executes the inspection process using the second inspection information generated by the processing device 10000 is not limited to the robot 30000. For example, as shown in Fig. 54, the inspection process may be performed in a dedicated machine using the second inspection information; the configuration in that case is as shown in Fig. 51B. Fig. 51B shows an example in which the inspection device accepts the input of the first inspection information (for example, using the interface unit IF of Fig. 54) and outputs this first inspection information to the processing device 10000. In this case, the processing device 10000 generates the second inspection information using the first inspection information input from the inspection device. However, various modifications are possible, such as an example in which the first inspection information is input directly from the user to the processing device.
As shown in Fig. 52, the robot 30000 of the present embodiment may be a single-arm robot having one arm. In Fig. 52, the imaging unit 5000 (hand-eye camera) is provided as the end effector of the arm 3100. However, various modifications are possible, such as providing a gripper such as a hand as the end effector and providing the imaging unit 5000 at the gripper, at another position on the arm 3100, or the like. In Fig. 52, a machine such as a PC is shown as the machine corresponding to the control unit 3500 of Fig. 51A, but this machine may also be the machine corresponding to the information acquisition unit 11110 and the processing unit 11120 of Fig. 50. Fig. 52 also shows an interface unit 6000 comprising an operation unit 6100 and a display unit 6200; however, whether the interface unit 6000 is included, and how the interface unit 6000 is configured when it is included, can be modified in various ways.
The structure of the robot 30000 of the present embodiment is not limited to Fig. 52. For example, as shown in Fig. 53, the robot 30000 may include at least a first arm 3100 and a second arm 3200 different from the first arm 3100, and the imaging unit 5000 may be a hand-eye camera provided on at least one of the first arm 3100 and the second arm 3200. In Fig. 53, the first arm 3100 is composed of joints 3110 and 3130 and frames 3150 and 3170 provided between the joints, and the second arm 3200 is configured likewise, but the arms are not limited to this. Fig. 53 shows an example of a dual-arm robot having two arms, but the robot of the present embodiment may have three or more arms. Although both the hand-eye camera (5000-1) provided on the first arm 3100 and the hand-eye camera (5000-2) provided on the second arm 3200 are depicted, the imaging unit may be provided on only one of them.
The robot 30000 of Fig. 53 also includes a base unit 4000. The base unit 4000 is provided at the bottom of the robot body and supports the robot body. In the example of Fig. 53, the base unit 4000 is provided with wheels and the like so that the robot as a whole can move. However, the base unit 4000 may have no wheels or the like and may instead be fixed to the floor or the like. In the robot of Fig. 53, the control device (the device illustrated as the control unit 3500 in Fig. 52) is housed in the base unit 4000, so that the robot mechanism 300000 and the control unit 3500 are constituted as one body. Alternatively, instead of providing a dedicated control machine corresponding to the control unit 3500 of Fig. 52, the above-mentioned control unit 3500 may be realized by a board built into the robot (more specifically, an IC or the like provided on the board).
When a robot having two or more arms is used, the inspection process can be performed flexibly. For example, when multiple imaging units 5000 are provided, the inspection process can be performed from multiple viewpoint positions and line-of-sight directions simultaneously. It is also possible to inspect, with the hand-eye camera provided on a given arm, an inspection object OB held by a gripper provided on another arm. In this case, not only the viewpoint position and line-of-sight direction of the imaging unit 5000 but also the position and pose of the inspection object OB can be changed.
As shown in Fig. 20, the function of the part corresponding to the processing unit 11120 and the like of the processing device or the robot 30000 of the present embodiment may be realized by a server 700 communicatively connected to the robot 30 via a network 20 comprising at least one of wired and wireless connections. Alternatively, in the present embodiment, part of the processing of the processing device of the invention may be performed on the server 700 side as a processing device. In that case, the processing is realized by distributing it between the processing device provided on the robot side and the server 700 serving as a processing device. Specifically, the server 700 side performs, of the respective processes of the processing device of the invention, the processes allocated to the server 700; on the other hand, the processing device 10000 provided in the robot performs, of the respective processes of the processing device of the invention, the processes allocated to the processing unit of the robot and the like.
For example, suppose the processing device of the invention performs first through Mth processes (M being an integer), and consider the case where each of the first through Mth processes is divided into multiple sub-processes, such that the first process is realized by sub-process 1a and sub-process 1b, and the second process is realized by sub-process 2a and sub-process 2b. In this case, distributed processing is conceivable in which the server 700 side performs sub-process 1a, sub-process 2a, ..., sub-process Ma, and the processing device 100 provided on the robot side performs sub-process 1b, sub-process 2b, ..., sub-process Mb. Here, the processing device of the present embodiment, i.e., the processing device that executes the first through Mth processes, may be the processing device that performs sub-processes 1a through Ma, may be the processing device that performs sub-processes 1b through Mb, or may be the processing device that performs all of sub-processes 1a through Ma and 1b through Mb. In short, the processing device of the present embodiment is a processing device that executes at least one sub-process of each of the first through Mth processes.
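The sub-process division described above can be sketched schematically; the sub-process bodies are arbitrary placeholders standing in for the server-side and robot-side shares of each process:

```python
def server_side(i, x):
    # Sub-process "ia", allocated to server 700 (e.g. a heavy computation).
    return x * 2

def robot_side(i, x):
    # Sub-process "ib", allocated to the robot-side processing device.
    return x + i

def execute_all(M, x):
    """Run processes 1..M, each split into a server and a robot sub-process."""
    results = []
    for i in range(1, M + 1):
        intermediate = server_side(i, x)            # sub-process ia
        results.append(robot_side(i, intermediate)) # sub-process ib
    return results

print(execute_all(3, 5))     # [11, 12, 13]
```

Each of the M processes completes only when both of its sub-processes run, matching the statement that the processing device executes at least one sub-process of each process.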
Thus, the server 700, whose processing capacity is higher than that of the processing device 10000 on the robot side, can, for example, take on processes with a high processing load. Furthermore, when the processing device also performs robot control, the server 700 can control the actions of each robot collectively, making it easy, for example, to coordinate the actions of multiple robots.
In recent years, the manufacture of parts in many varieties and small quantities has tended to increase. When the type of parts manufactured is changed, the actions the robot performs must be changed. With a structure like that of Fig. 20, even without re-teaching each of the multiple robots individually, the server 700 can collectively change the actions performed by the robots. Furthermore, compared with the case where one processing device is provided for each robot, the trouble involved in software updates of the processing devices and the like can be greatly reduced.
3. Processing flow
Next, the processing flow of the present embodiment is described. Specifically, the flow of acquiring the first inspection information and generating the second inspection information, and the flow executed when the inspection process is performed in accordance with the generated second inspection information, are described. When the inspection process is assumed to be executed by a robot, the acquisition of the first inspection information and the generation of the second inspection information can be executed independently of the robot's actions in the inspection process, and are therefore expressed as offline processing. On the other hand, since the execution of the inspection process accompanies robot action, it is expressed as online processing. In the following, an example is described in which the target of the inspection process is the result of assembly work by the robot and the inspection process is also executed by the robot; as described above, however, various modifications are possible.
3.1 Offline processing
First, Fig. 55 shows concrete examples of the first inspection information and the second inspection information of the present embodiment. The second inspection information includes the view information (viewpoint position and line-of-sight direction), the inspection region (ROI; region of interest), and the qualified image. The first inspection information includes the shape information (three-dimensional model data), the position/pose information of the inspection object, and the inspection target position.
The flow process of concrete processed offline is shown in the flow chart of Figure 56.If proceeding by
It is right that processed offline, first information acquiring section 11110 obtain inspection as the first inspection information
Three-dimensional modeling data (shape information) (S100001) as thing.Checking (visual examination)
In, it is important that how to observe inspection object, from given viewpoint position, sight line side
The shape checking object is depended on to the view mode observed.Particularly, for threedimensional model
For data, due to be N/D, without deformation preferable state under check object letter
Breath, so becoming for checking information useful in the inspection process in kind of object.
It is the process that the result for robot manipulating task based on robot is carried out in inspection process
In the case of, information acquiring section 11110 obtains the inspection obtained as the result of robot manipulating task
Look into after the three-dimensional modeling data of object that is operation before three-dimensional modeling data and robot manipulating task
Check three-dimensional modeling data before the three-dimensional modeling data of object that is operation.
When the result of robot work is inspected, it must be judged whether the work itself was performed properly. When the work is assembly work in which an object B is assembled to an object A, it is judged whether object B has been assembled to object A at the prescribed position and from the prescribed direction. That is, obtaining the individual three-dimensional model data of object A and object B is insufficient; what matters is the data in which object B is assembled to object A at the prescribed position and from the prescribed direction, i.e., the three-dimensional model data in the state where the work has been completed ideally. The information acquisition unit 11110 of the present embodiment therefore obtains the post-work three-dimensional model data. In addition, as in the setting of the inspection region and the acceptance threshold described later, there are also scenes in which the difference in appearance before and after the work becomes the key point, so the pre-work three-dimensional model data is, in a narrow sense, obtained together with it.
Figs. 57A and 57B show examples of the pre-work three-dimensional model data and the post-work three-dimensional model data. In Figs. 57A and 57B, the illustrated work assembles, to a cubic block object A, a cubic block object B in the same pose as object A at a position offset along one given axis. In this case, since the pre-work three-dimensional model data precedes the assembly of object B, it is the three-dimensional model data of object A as shown in Fig. 57A. As shown in Fig. 57B, the post-work three-dimensional model data is the data in which object A and object B are assembled under the above-mentioned condition. In Figs. 57A and 57B, for the convenience of planar illustration, the three-dimensional model data appears as if observed from a given viewpoint position and line-of-sight direction; as the word "three-dimensional" indicates, however, the shape data obtained as the first inspection information is three-dimensional data that is not restricted to any particular observation position or direction.
In S100001, viewpoint candidate information, i.e., candidates for the view information (information comprising a viewpoint position and a line-of-sight direction), is also acquired. This viewpoint candidate information is assumed not to be information input by the user or generated by the processing unit 11120; it is, for example, information preset by the manufacturer before the processing device 10000 (or the robot 30000) is shipped.
Although the viewpoint candidate information has been described above as candidates for the view information, the points that could serve as such candidates are extremely numerous (in a narrow sense, unlimited). For example, when the view information is set in the object coordinate system referenced to the inspection object, every point other than those contained inside the inspection object could become viewpoint candidate information. Of course, if that many candidates are used (i.e., the candidates are not restricted in the processing), the view information can be set flexibly and finely according to the situation. Hence, if the processing load at the time of setting the view information is not a problem, the viewpoint candidate information need not be acquired in S100001. In the following description, however, the viewpoint candidate information is set in advance, so that it can be used universally even when various objects become the inspection target, and so that the processing load in setting the view information does not increase.
Here, the placement, pose, and so on of the inspection object OB at the time of inspection are not necessarily known. Since it is therefore uncertain whether the imaging unit 5000 can actually be moved to the position and pose corresponding to a given piece of view information, restricting the view information to a very small number (for example, one or two) is impractical: if only a few pieces of view information are generated and the imaging unit 5000 cannot be moved to any of them, the inspection processing cannot be executed at all. To suppress this risk, a certain number of pieces of view information must be generated, and as a result the number of viewpoint candidates also becomes a certain number.
Figure 58 shows an example of viewpoint candidate information. In Figure 58, 18 viewpoint candidates are set around the origin of the object coordinate system; the concrete coordinate values are shown in Figure 59. For viewpoint candidate A, the viewpoint position lies on the x-axis, at a given distance from the origin (200 in the example of Figure 59). The line-of-sight direction corresponds to the vector represented by (ax, ay, az); for viewpoint candidate A it is the negative x direction, i.e., the direction toward the origin. Note that determining the line-of-sight vector alone does not fix the pose uniquely, because the imaging unit 5000 can still rotate about the line-of-sight vector. Therefore, a further vector (bx, by, bz) that specifies the rotation angle about the line-of-sight vector is set in advance. As shown in Figure 58, a total of 18 points, 2 on each of the x, y, and z axes and the points lying between each pair of axes, are used as the viewpoint candidates. If the viewpoint candidates are set around the origin of the object coordinate system in this way, appropriate view information can be established in the world coordinate system (robot coordinate system) no matter how the inspection object is placed. Specifically, this suppresses the possibility that the imaging unit 5000 cannot be moved to all (or most) of the view information derived from the candidates, or that inspection is impossible because of occluders and the like, so that inspection under a sufficient number of pieces of view information can be realized.
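The 18-candidate layout of Figures 58 and 59 can be sketched as follows. This is an illustrative reconstruction (the patent gives no code), assuming the distance of 200 from Figure 59 and placing the between-axis candidates on the unit diagonals of each axis pair; candidate A of Figure 59 (position on the x-axis, gaze toward the origin) comes out first.

```python
import itertools
import math

def viewpoint_candidates(distance=200.0):
    """Generate the 18 viewpoint candidates of Figure 58: 2 points on each
    of the x, y, z axes (6 total) plus the points between each pair of axes
    (12 total).  Each candidate pairs a viewpoint position with a
    line-of-sight vector (ax, ay, az) pointing back at the origin."""
    directions = []
    # 6 axis points: +/- direction on each axis.
    for axis in range(3):
        for sign in (1.0, -1.0):
            d = [0.0, 0.0, 0.0]
            d[axis] = sign
            directions.append(tuple(d))
    # 12 between-axis points: unit diagonals in the xy, xz and yz planes.
    for a, b in itertools.combinations(range(3), 2):
        for sa, sb in itertools.product((1.0, -1.0), repeat=2):
            d = [0.0, 0.0, 0.0]
            d[a], d[b] = sa / math.sqrt(2.0), sb / math.sqrt(2.0)
            directions.append(tuple(d))
    candidates = []
    for d in directions:
        position = tuple(c * distance for c in d)
        gaze = tuple(-c for c in d)  # line of sight toward the origin
        candidates.append((position, gaze))
    return candidates

cands = viewpoint_candidates()
```

Every candidate sits at the same distance from the origin, so the origin (where the inspection processing target position will later be placed) lies on every optical axis.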
In a visual inspection, it is permissible to inspect from only one viewpoint position and line-of-sight direction, but if precision is considered, inspection from multiple viewpoint positions and line-of-sight directions is preferable. This is because, when the inspection is performed from only one direction, there is a possibility that the region to be checked cannot be observed sufficiently (for example, at a sufficiently large size in the image). The second inspection information is therefore preferably not a single piece of view information but a view information group containing multiple pieces of view information. This is realized, for example, by generating view information from multiple candidates among the above viewpoint candidates (essentially all of them). Even when the viewpoint candidate information is not used, multiple pieces of view information are acquired. That is, the second inspection information includes a view information group containing multiple pieces of view information, and each piece of view information in the group includes a viewpoint position and line-of-sight direction of the imaging unit 5000 during the inspection processing. Specifically, the processing unit 11120 generates, according to the first inspection information, the view information group containing the multiple pieces of view information of the imaging unit 5000 as the second inspection information.
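The structure of the second inspection information described above can be pictured as a small container type. The field names below are illustrative, not from the patent; the per-view fields anticipate the items that steps S100003 to S100006 attach to each viewpoint later in this section.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class ViewInfo:
    position: Vec3                      # viewpoint position (x, y, z)
    gaze: Vec3                          # line-of-sight direction (ax, ay, az)
    roll_ref: Vec3                      # (bx, by, bz), fixes rotation about the gaze
    qualified_image: Optional[object] = None   # filled in by S100003
    inspection_region: Optional[tuple] = None  # filled in by S100004
    threshold: Optional[float] = None          # filled in by S100005
    priority: Optional[float] = None           # filled in by S100006

@dataclass
class SecondInspectionInfo:
    views: List[ViewInfo] = field(default_factory=list)

info = SecondInspectionInfo(views=[
    ViewInfo(position=(200.0, 0.0, 0.0),
             gaze=(-1.0, 0.0, 0.0),
             roll_ref=(0.0, 0.0, 1.0)),
])
```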
The above viewpoint candidate information consists of positions in the object coordinate system, but at the stage when the candidates are set, the shape and size of the concrete inspection object are undetermined. Specifically, although Figure 58 shows the object coordinate system referenced to the inspection object, the pose of the object within that coordinate system remains indefinite. Since the view information must at least specify a positional relationship relative to the inspection object, the candidates must be associated with the inspection object in order to generate concrete view information from them.
Consider the above viewpoint candidate information: the origin of the coordinate system in which the candidates are set is the position at the center of all the viewpoint candidates, and when the imaging unit 5000 is placed at each candidate, the origin lies in its shooting direction (optical axis direction). In other words, the origin of the coordinate system is the optimal position for observation by the imaging unit 5000. Since the position that should be observed in the inspection processing is the above inspection processing target position (in a narrow sense an assembly position, but it may also be a grasp position, as shown in Figure 58), the inspection processing target position acquired as the first inspection information is used to generate the view information corresponding to the inspection object.
That is, the first inspection information includes the relative inspection processing target position for the above inspection object, and the robot 30000 sets an object coordinate system corresponding to the inspection object with the inspection processing target position as the reference, and generates the view information using that object coordinate system. Specifically, the information acquisition unit 11110 acquires, as the first inspection information, the relative inspection processing target position for the inspection object, and the processing unit 11120 sets the object coordinate system corresponding to the inspection object with the inspection processing target position as the reference and generates the view information using the object coordinate system (S100002).
For example, suppose the shape data of the inspection object is the shape shown in Figure 60, and the point O therein is acquired as the first inspection information, i.e., as the inspection processing target position. In this case, the point O is taken as the origin, and the pose of the inspection object is set as in the object coordinate system shown in Figure 60. Once the pose of the inspection object in the object coordinate system is determined, the relative relationship of each viewpoint candidate to the inspection object becomes clear, and therefore each viewpoint candidate can be used as view information.
Once the view information group containing the multiple pieces of view information has been generated, the various items of second inspection information are generated. First, the qualified image corresponding to each piece of view information is generated (S100003). Specifically, the processing unit 11120 acquires, as the qualified image used in the inspection processing, the image of the three-dimensional model data captured by a virtual camera placed at the viewpoint position and line-of-sight direction corresponding to the view information.
The qualified image must be an image showing the ideal state of the inspection object. Since the three-dimensional model data (shape information) acquired as the first inspection information is the ideal shape data of the inspection object, the image of the three-dimensional model data observed from an imaging unit placed according to the view information should be used as the qualified image. Note that when the three-dimensional model data is used, this is not actual shooting by the imaging unit 5000 but a conversion of the three-dimensional data into two-dimensional data (specifically, a projection process) by means of a virtual camera. Furthermore, if inspection processing of the result of robot work is assumed, the qualified image is an image showing the ideal state of the inspection object at the end of the robot work. Since the ideal state of the inspection object at the end of the robot work is represented by the above post-work three-dimensional model data, the image of the post-work three-dimensional model data captured by the virtual camera is used as the qualified image. Because a qualified image is obtained for each piece of view information, when 18 pieces of view information are set as described above, there are also 18 qualified images.
The images on the right side of each of Figures 61A to 61G correspond to the qualified images for the assembly work of Figure 57B. Figures 61A to 61G depict the images for 7 viewpoints, and, as described above, images are obtained in the same number as the pieces of view information. In addition, in the processing of S100003, in view of the inspection region and pass threshold processing described later, the pre-work images of the pre-work three-dimensional model data captured by the virtual camera (the left side of each of Figures 61A to 61G) are also acquired in advance.
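The virtual-camera conversion named in S100003 amounts to projecting each model point into an image plane. A minimal pinhole sketch follows, assuming a camera that gazes at the object-frame origin (as the viewpoint candidates above do); the focal length and up-vector hint are illustrative parameters, not values from the patent, and the up hint must simply not be parallel to the gaze.

```python
import math

def _norm(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def _sub(a, b): return tuple(x - y for x, y in zip(a, b))
def _dot(a, b): return sum(x * y for x, y in zip(a, b))
def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def project(point, cam_pos, up_hint=(0.0, 0.0, 1.0), focal=500.0):
    """Pinhole projection of a 3-D model point into a virtual camera placed
    at cam_pos and gazing at the object-frame origin.  Returns image-plane
    coordinates; a renderer would do this for every visible model vertex."""
    forward = _norm(_sub((0.0, 0.0, 0.0), cam_pos))  # gaze toward origin
    right = _norm(_cross(forward, up_hint))          # camera x-axis
    up = _cross(right, forward)                      # camera y-axis
    rel = _sub(point, cam_pos)
    depth = _dot(rel, forward)
    # perspective divide onto the image plane at the focal distance
    return (focal * _dot(rel, right) / depth,
            focal * _dot(rel, up) / depth)
```

A point at the origin projects to the image center for every candidate, which is consistent with the origin lying on each candidate's optical axis.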
Next, the inspection region, i.e., the region within the qualified image and the captured image that is used for the inspection processing, is acquired as second inspection information (S100004). The inspection region is as described above. Since the appearance of the important part changes according to the view information, the inspection region is set for each piece of view information contained in the view information group. Specifically, the processing unit 11120 acquires, as the qualified image, the image of the post-work three-dimensional model data captured by the virtual camera placed at the viewpoint position and line-of-sight direction corresponding to the view information, acquires, as the pre-work image, the image of the pre-work three-dimensional model data captured by the virtual camera placed at the viewpoint position and line-of-sight direction corresponding to the view information, and obtains the inspection region, i.e., the region within the image used for the inspection processing, from a comparison of the pre-work image with the qualified image.
A concrete example of the inspection region setting processing is shown in Figures 62A to 62D. When the inspection target is the result of robot work in which the object B is assembled to the object A from the right, the image of the post-work three-dimensional model data captured by the virtual camera is acquired as the qualified image as shown in Figure 62B, and the image of the pre-work three-dimensional model data captured by the virtual camera is acquired as the pre-work image as shown in Figure 62A. In robot work that involves a change of state between before and after the work, it is the changed part of the state that matters most. In the example of Figure 62B, what should be judged in the inspection is whether the object B has been assembled to the object A, whether its assembly position is correct, and so on. The parts of the object A unrelated to the work (for example, surfaces other than the joint surface in the assembly) could also be checked, but their importance is comparatively low.
That is, the region of high importance within the qualified image and the captured image can be regarded as the region that changes between before and after the work. Therefore, in the present embodiment, the processing unit 11120 performs, as the comparison processing, a process of obtaining the difference between the pre-work image and the qualified image, i.e., a difference image, and acquires as the inspection region the region of the difference image that contains the inspection object. In the example of Figures 62A and 62B, the difference image is Figure 62C, and so an inspection region is set that includes the region of the object B contained in Figure 62C. In this way, the region of the difference image containing the inspection object, i.e., the region inferred to be of high importance for the inspection, can be used as the inspection region.
Here, the inspection processing target position (the assembly position in Figure 62A and elsewhere) is known as first inspection information, and where this inspection processing target position lies in the image is also known. Since the inspection processing target position serves as the reference position for the inspection, the inspection region can also be obtained from the difference image and the inspection processing target position. For example, as shown in Figure 62C, the processing unit 11120 obtains, relative to the inspection processing target position, the maximum vertical extent BlobHeight and the maximum horizontal extent BlobWidth of the region remaining in the difference image. Then, if the region within BlobHeight above and below and BlobWidth to the left and right of the inspection processing target position is taken as the inspection region, the region of the difference image containing the inspection object can be obtained as the inspection region. Furthermore, in the present embodiment a margin may also be set in the vertical and horizontal directions; in the example of Figure 62D, a margin of 30 pixels is set on each of the top, bottom, left, and right sides of the inspection region.
Figures 63A to 63D and Figures 64A to 64D are similar. Figures 63A to 63D are an example of work in which a thinner object B is assembled on the far side of the object A as seen from the viewpoint position (or work in which a rod-shaped object B is inserted into a hole of the object A that is not visible in the image). In this case, the region of the inspection object in the difference image is divided into multiple discontinuous regions, but the processing can be performed in the same way as in Figures 62A to 62D. Here, the object B is assumed to be thinner than the object A, and the areas near the upper and lower ends of the image of the object A are of low importance for the inspection. With the means of the present embodiment, as shown in Figure 63D, the regions of the object A considered to be of low importance can be removed from the inspection region.
Figures 64A to 64D show work in which an object B smaller than the object A is assembled to the larger object A. This corresponds, for example, to work in which a screw serving as the object B is fastened at a specified position of an object A such as a PC or a printer. In such work there is little need to check the entire PC or printer; it is the position where the screw is fastened that is of high importance. With the means of the present embodiment, as shown in Figure 64D, most of the object A can be removed from the inspection region, and the surroundings of the object B that should be checked can be set as the inspection region.
The above means is a highly versatile way of setting the inspection region, but the means of setting the inspection region in the present embodiment is not limited to it, and the inspection region may also be set by other means. For example, in Figure 62D an even narrower inspection region would suffice, so a means that sets a narrower region may also be used.
Next, the setting processing of the threshold (pass threshold) used in the comparison processing between the qualified image and the actually captured image is performed (S100005). Specifically, the processing unit 11120 acquires the above qualified image and the pre-work image, and, according to the similarity between the pre-work image and the qualified image, sets the threshold used in the inspection processing based on the captured image and the qualified image.
A concrete example of the threshold setting processing is shown in Figures 65A to 65D. Figure 65A is the qualified image. If the robot work is performed ideally (in a broad sense, if the inspection object is in the ideal state), the actually captured image should also coincide with the qualified image, and the similarity takes the maximum value (1000 here). Conversely, if there is no element at all that coincides with the qualified image, the similarity is the minimum value (0 here). The threshold here is the following value: if the similarity between the qualified image and the captured image is at or above the threshold, the inspection is judged to have passed; if the similarity is below the threshold, the inspection is judged to have failed. That is, the threshold is a specified value between 0 and 1000.
Now, Figure 65B is the pre-work image corresponding to Figure 65A, but since Figure 65B also contains parts in common with Figure 65A, the similarity between the pre-work image and the qualified image is not 0. For example, when the similarity judgment is performed using the edge information of the images, Figure 65C, the edge information of Figure 65A, is used for the comparison processing, and Figure 65D, the edge information of the pre-work image, also contains parts coinciding with Figure 65C. In the example of Figures 65C and 65D, the similarity comes to about 700. Thus, even if an inspection object on which the work has not been performed at all is captured in the captured image, the similarity between that captured image and the qualified image still holds a value of about 700. Capturing an inspection object on which the work has not been performed at all refers, for example, to states in which the work itself could not be executed, or in which the object to be assembled fell and does not appear in the image, and so on, i.e., cases in which the robot work has very likely failed. In other words, since a similarity of about 700 arises even in situations that should be judged as "fail" in the inspection, setting the threshold at a value below this can be said to be inappropriate.
Therefore, in the present embodiment, a value between the maximum similarity (for example, 1000) and the similarity between the pre-work image and the qualified image (for example, 700) is set as the threshold. As one example, the average value is used, and the threshold can be obtained by the following formula (13).

Threshold = {1000 + (similarity between the qualified image and the pre-work image)} / 2   (13)
The setting of the threshold can be variously modified; for example, the formula by which the threshold is obtained may be changed according to the value of the similarity between the qualified image and the pre-work image. For instance, the following modification is possible: when the similarity between the qualified image and the pre-work image is 600 or less, the threshold is fixed at 800; when the similarity between the qualified image and the pre-work image is 900 or more, the threshold is fixed at 1000; and in all other cases the above formula (13) is used.
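The threshold rule just described, formula (13) together with its piecewise modification, can be written directly. The 600/800/900/1000 figures are the example values from the text; boundary handling at exactly 600 and 900 is an assumption.

```python
def pass_threshold(pre_work_similarity, s_max=1000.0):
    """Pass threshold per S100005: a value between the maximum similarity
    and the similarity of the pre-work image to the qualified image.
    Implements formula (13) with the piecewise variant from the text."""
    s = pre_work_similarity
    if s <= 600:
        return 800.0     # clamp low: pre-work image already very dissimilar
    if s >= 900:
        return 1000.0    # clamp high: before/after appearance nearly identical
    return (s_max + s) / 2.0   # formula (13): {1000 + similarity} / 2
```

For the Figure 65 example (similarity about 700) this yields 850, safely above the roughly 700 that even a completely unworked object would score.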
Note that the similarity between the qualified image and the pre-work image changes with the change in appearance of the inspection object before and after the work. For example, in the case of the view information corresponding to Figures 66A to 66D, which differs from that of Figures 65A to 65D, the difference in appearance of the inspection object before and after the assembly is smaller, and as a result the similarity between the qualified image and the pre-work image is higher than in the above example. That is, just as for the qualified image and the inspection region, the similarity calculation and threshold processing are also performed for each piece of view information contained in the view information group.
Finally, the processing unit 11120 sets, for each piece of view information in the view information group, priority information representing the priority with which the imaging unit 5000 is moved to the viewpoint position and line-of-sight direction corresponding to that view information (S100006). As described above, the appearance of the inspection object changes according to the viewpoint position and line-of-sight direction represented by the view information. It is therefore possible that the region of the inspection object that should be checked can be observed well from one piece of view information but cannot be observed from another. Moreover, since the view information group contains a sufficient number of pieces of view information, the inspection processing need not take all of them as its target: if the inspection passes at the specified pieces of view information (for example, at 2 positions), the final result is a pass, and the pieces of view information that have not yet become the target need not be processed. From the above, if the efficiency of the inspection processing is considered, it is preferable to process with priority the view information that is useful in the inspection, e.g., from which the region to be checked can be observed well. Therefore, in the present embodiment, a priority is set for each piece of view information.
Here, when the inspection processing targets the result of robot work, a clear difference between before and after the work is useful for the inspection. As an extreme example, consider, as shown in Figure 67A, work in which a smaller object B is assembled to a larger object A from the upper left of the drawing. In this case, when view information 1, corresponding to viewpoint position 1 and line-of-sight direction 1 of Figure 67A, is used, the pre-work image is as shown in Figure 67B and the qualified image as shown in Figure 67C, and no difference arises. That is, view information 1 is not useful view information for inspecting the work here. On the other hand, when view information 2, corresponding to viewpoint position 2 and line-of-sight direction 2, is used, the pre-work image is as shown in Figure 67D and the qualified image as shown in Figure 67E, and the difference is clear. In such a case, the priority of view information 2 can be made higher than the priority of view information 1.
That is, the larger the amount of change between before and after the work, the higher the priority is set; and a large amount of change between before and after the work corresponds to a low similarity between the pre-work image and the qualified image, as explained with Figures 65A to 66D. Therefore, in the processing of S100006, the similarity between the pre-work image and the qualified image (obtained when the threshold was set in S100005) is computed for each of the multiple pieces of view information, and the lower the similarity, the higher the priority that is set. When the inspection processing is executed, the imaging unit 5000 is moved in order starting from the view information with the highest priority, and the inspection is thereby carried out.
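Since S100006 reduces to "lower pre-work similarity means higher priority", the priority order is simply an ascending sort on that similarity. A sketch, representing each viewpoint as an illustrative (view_id, pre_work_similarity) pair:

```python
def by_priority(views):
    """Order view information for S100006: viewpoints whose pre-work image
    is least similar to the qualified image (i.e. the largest before/after
    change) come first.  Each entry is (view_id, pre_work_similarity)."""
    return sorted(views, key=lambda v: v[1])

# view 2 of Figure 67 (clear difference, low similarity) outranks view 1:
ordered = by_priority([("view1", 950), ("view2", 700)])
```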
3.2 Online processing

Next, the flow of the inspection processing using the second inspection information, i.e., the online processing, is explained with the flowchart of Figure 68. When the online processing is started, the second inspection information generated by the above offline processing is first read in (S2001).
Next, the robot 30000 moves the imaging unit 5000 to the viewpoint position and line-of-sight direction corresponding to each piece of view information in the view information group, according to a movement order set on the basis of the priorities represented by the priority information. This can be realized, for example, by the processing unit 11120 of Figure 50 or by the control of the control unit 3500 of Figure 51A. Specifically, the piece of view information with the highest priority among the multiple pieces of view information contained in the view information group is selected (S2002), and the imaging unit 5000 is moved to the viewpoint position and line-of-sight direction corresponding to that view information (S2003). By controlling the imaging unit 5000 according to the above priorities in this way, efficient inspection processing can be realized.
However, the view information in the offline processing is basically information defined in the object coordinate system, not information that takes account of the position in real space (the world coordinate system or robot coordinate system). For example, as shown in Figure 69A, suppose a viewpoint position and line-of-sight direction are set in the object coordinate system in the direction of a given face F1 of the inspection object. In that case, if during the inspection processing of this inspection object the object is placed on the workbench with the face F1 facing downward, as shown in Figure 69B, the above viewpoint position and line-of-sight direction lie below the workbench, and the imaging unit 5000 (the hand-eye camera of the robot 30000) cannot be moved there.
That is, S2003 is control under which the imaging unit 5000 cannot necessarily be moved to the pose corresponding to the view information, and so a judgment is made as to whether the movement is possible (S2004), with the movement being performed when it is possible. Specifically, when the processing unit 11120 judges, according to the movable range information of the robot, that the imaging unit 5000 cannot be moved to the viewpoint position and line-of-sight direction corresponding to the i-th (i being a natural number) piece of view information among the multiple pieces of view information, it performs control that skips the movement of the imaging unit 5000 corresponding to the i-th view information and moves to the j-th (j being a natural number satisfying i ≠ j) piece of view information that follows the i-th view information in the movement order. Concretely, when the judgment in S2004 is No, the inspection processing from S2005 onward is skipped, the flow returns to S2002, and the next piece of view information is selected.
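The S2002 to S2005 loop just described can be sketched as follows. The `reachable` and `inspect` callbacks are assumptions standing in for the movable-range judgment of S2004 and the shoot-and-compare step of S2005 respectively.

```python
def run_inspection(views, reachable, inspect):
    """Walk the viewpoints in priority order (S2002), skip the i-th view
    when the imaging unit cannot reach it (S2004 = No), and shoot and
    compare at each reachable one (S2005)."""
    results = {}
    for view in views:                  # already sorted by priority (S2002)
        if not reachable(view):         # S2004: movable-range judgement
            continue                    # skip S2005, fall through to next view
        results[view] = inspect(view)   # S2005: capture image, compare
    return results

# View "b" is unreachable and is skipped; "a" passes, "c" fails:
res = run_inspection(["a", "b", "c"],
                     reachable=lambda v: v != "b",
                     inspect=lambda v: v == "a")
```

A fuller version would also stop early once the specified number of views have passed, per the efficiency remark in S100006.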
Here, let N be the number of pieces of view information contained in the view information group (N being a natural number; N = 18 in the example of Figure 58 above), let i be an integer satisfying 1 ≤ i ≤ N, and let j be an integer satisfying 1 ≤ j ≤ N and j ≠ i. The movable range information of the robot is information representing the range within which the part of the robot provided with the imaging unit 5000, in particular, can move. For each joint included in the robot, the range of joint angles that the joint can take is determined at the design stage, and once the value of the joint angle of each joint is determined, a given position of the robot can be calculated by forward kinematics. That is, the movable range information is information derived from the design items of the robot; it may be the set of values that the joint angles can take, it may be the poses in space that the imaging unit 5000 can assume, or it may be other information.
In addition, the movable range information is expressed in the robot coordinate system or world coordinate system of the robot. Therefore, in order to compare the view information with the movable range information, the view information in the object coordinate system shown in Figure 69A must be converted into view information in the positional relationship of real space, i.e., in the robot coordinate system, as shown in Figure 69B.
Accordingly, the information acquisition unit 11110 acquires in advance, as first inspection information, object pose information representing the position and pose of the inspection object in the global coordinate system. In the processing of S2004, the processing unit 11120 obtains, from the relative relationship between the global coordinate system and the object coordinate system derived from the object pose information, the view information expressed in the global coordinate system, and judges from the movable range information of the robot expressed in the global coordinate system and the view information expressed in the global coordinate system whether the imaging unit 5000 can be moved to the viewpoint position and line-of-sight direction represented by the view information. Since this processing is a coordinate transformation, the relative relationship of the two coordinate systems is required, and it can be obtained from the position and pose of the reference of the object coordinate system within the global coordinate system, i.e., from the object pose information.
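The transformation and judgment of S2004 can be sketched as follows. For brevity the object pose is reduced to a translation plus a rotation about z, and the movable range to axis-aligned workspace bounds; both are simplifying assumptions (a full implementation would use a complete rotation matrix or quaternion and the joint-angle ranges described above).

```python
import math

def to_global(view_pos, obj_pose):
    """Convert a viewpoint position from the object coordinate system to
    the global coordinate system.  obj_pose = ((ox, oy, oz), theta), the
    object-frame origin in global coordinates and its rotation about z."""
    (ox, oy, oz), theta = obj_pose
    x, y, z = view_pos
    c, s = math.cos(theta), math.sin(theta)
    return (ox + c * x - s * y, oy + s * x + c * y, oz + z)

def within_movable_range(pos, workspace):
    """Movable-range judgement reduced to a bounding box (lo, hi):
    a placeholder for the joint-angle / forward-kinematics check."""
    lo, hi = workspace
    return all(l <= p <= h for p, l, h in zip(pos, lo, hi))

# Object sits at (100, 0, 0), rotated 90 degrees; workbench surface at z = 0.
g = to_global((200.0, 0.0, 0.0), ((100.0, 0.0, 0.0), math.pi / 2))
ws = ((-500.0, -500.0, 0.0), (500.0, 500.0, 500.0))
```

A viewpoint that ends up below the workbench (negative global z), as in the face-F1-down case of Figure 69B, fails the judgment and is skipped.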
In addition, the line-of-sight direction represented by the view information need not uniquely determine the posture of the imaging section 5000. Specifically, as described for the viewpoint candidate information of Fig. 58 and Fig. 59, the viewpoint position is determined by (x, y, z) and the posture of the imaging section 5000 by (ax, ay, az) and (bx, by, bz); however, (bx, by, bz) may be left out of consideration. If all of (x, y, z), (ax, ay, az) and (bx, by, bz) are imposed as conditions in the judgment of whether the imaging section 5000 can be moved to the viewpoint position and line-of-sight direction represented by the view information, it becomes difficult for a movement of the imaging section 5000 to satisfy them. Specifically, even when shooting in the direction (ax, ay, az) from the position (x, y, z) as origin is possible, the vector representing the rotation angle about (ax, ay, az) may only take a limited range, so (bx, by, bz) may not be satisfiable. Therefore, in the present embodiment, the line-of-sight direction need not include (bx, by, bz); if (x, y, z) and (ax, ay, az) are satisfied, it is judged that the imaging section 5000 can be moved to the view information.
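The relaxed judgment described above can be sketched as follows. The box-shaped movable range and the angular tolerance on the line-of-sight direction are illustrative assumptions, not from the patent; the point of the sketch is only that (bx, by, bz) does not enter the reachability conditions.

```python
# Hedged sketch of the relaxed reachability judgment: only the viewpoint
# position (x, y, z) and line-of-sight direction (ax, ay, az) are imposed;
# the rotation (bx, by, bz) about the line of sight is ignored.
# The box-shaped range and 5-degree tolerance are assumptions.
import math

def can_move_to(viewpoint, movable_min, movable_max, reachable_dirs, tol_deg=5.0):
    x, y, z = viewpoint["position"]      # (x, y, z)
    ax, ay, az = viewpoint["direction"]  # (ax, ay, az); (bx, by, bz) unused
    # Condition 1: viewpoint position inside the robot's movable range.
    inside = all(lo <= v <= hi for v, lo, hi in
                 zip((x, y, z), movable_min, movable_max))
    if not inside:
        return False
    # Condition 2: some reachable direction is within tol_deg of (ax, ay, az).
    def angle(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(a * a for a in v))
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))
    return any(angle((ax, ay, az), d) <= tol_deg for d in reachable_dirs)

vp = {"position": (0.2, 0.1, 0.5), "direction": (0.0, 0.0, -1.0)}
print(can_move_to(vp, (-1, -1, 0), (1, 1, 1), [(0.0, 0.0, -1.0)]))  # True
```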
When the imaging section 5000 has moved to the position and posture corresponding to the view information, shooting by the imaging section 5000 in that posture is performed and a captured image is acquired (S2005). The inspection process is performed by comparing the captured image with the qualified image; however, while the qualified image uses the prescribed value of (bx, by, bz), the imaging section 5000 that shot the captured image may have a rotation angle about the line-of-sight direction that differs from the angle represented by (bx, by, bz). For example, if the qualified image is as in Fig. 70A and the captured image as in Fig. 70B, a rotation by a given angle arises between the two images. In that case, cutting out the inspection region is inappropriate, and the similarity calculation is likewise inappropriate. For convenience of description, Fig. 70B, like the qualified image generated from the model data, is drawn with a single plain-color background, but since Fig. 70B is a captured image, other objects may appear in it. Also, depending on the illumination light and the like, the hue of the inspection object may differ from that in the qualified image.
Therefore, a process of calculating the image rotation angle between the captured image and the qualified image is performed (S2006). Specifically, since the above (bx, by, bz) is used when generating the qualified image, the rotation angle about the line-of-sight direction of the imaging section (virtual camera) corresponding to the qualified image is known information. Moreover, the posture of the imaging section 5000 at the time of shooting the captured image must also be known in the robot control that moved the imaging section 5000 to the posture corresponding to the view information; otherwise the movement could not have been performed at all. Thus the rotation angle about the line-of-sight direction of the imaging section 5000 at the time of shooting can also be obtained. In the process of S2006, the rotation angle between the images is obtained from the difference of the two rotation angles about the line-of-sight direction. Then, using the obtained image rotation angle, a rotational deformation process is applied to at least one of the qualified image and the captured image, thereby correcting the angle difference between the qualified image and the captured image.
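The S2006 calculation can be sketched as below. Rotating a pixel grid by a multiple of 90 degrees keeps the example dependency-free; a real implementation would use an image library (for example an affine warp) for arbitrary angles, which the patent leaves unspecified.

```python
# Hedged sketch of S2006: the image rotation angle is the difference of the
# two known rotation angles about the line-of-sight direction, after which
# one of the two images is rotated by that angle.

def image_rotation_angle(angle_qualified_deg, angle_captured_deg):
    """Rotation between images = difference between the rotation angle of
    the virtual camera (qualified image) and that of the imaging section
    5000 (captured image), normalized to (-180, 180]."""
    d = (angle_captured_deg - angle_qualified_deg) % 360.0
    return d - 360.0 if d > 180.0 else d

def rotate90(img):
    """Rotate a row-major pixel grid 90 degrees counterclockwise
    (stand-in for the rotational deformation process)."""
    return [list(row) for row in zip(*img)][::-1]

print(image_rotation_angle(30.0, 120.0))  # 90.0
print(rotate90([[1, 2], [3, 4]]))         # [[2, 4], [1, 3]]
```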
Since the qualified image and the captured image are brought into angular correspondence by the above process, the inspection region obtained in S100004 is extracted from each image (S2007), and a confirmation process is performed using these regions (S2008). In S2008, the similarity between the qualified image and the captured image is calculated; if the similarity is equal to or greater than the threshold obtained in S100005, the result is judged acceptable, and otherwise unacceptable.
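S2007 and S2008 can be sketched as follows. Zero-mean normalized cross-correlation is an assumed similarity measure (the patent does not fix a particular one), and returning 0.0 for a constant-valued region is a simplification for the degenerate case.

```python
# Hedged sketch of S2007/S2008: extract the inspection region from each
# image and judge pass/fail from a similarity against the threshold.
import math

def crop(img, region):
    """Cut the inspection region (x0, y0, x1, y1) out of a pixel grid."""
    x0, y0, x1, y1 = region
    return [row[x0:x1] for row in img[y0:y1]]

def similarity(a, b):
    """Zero-mean normalized cross-correlation of two equal-size regions."""
    fa = [p for row in a for p in row]
    fb = [p for row in b for p in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    den = math.sqrt(sum((x - ma) ** 2 for x in fa) *
                    sum((y - mb) ** 2 for y in fb))
    return num / den if den else 0.0  # constant region: treat as no match

def confirm(qualified, captured, region, threshold):
    """S2008: acceptable iff similarity >= threshold (from S100005)."""
    return similarity(crop(qualified, region), crop(captured, region)) >= threshold

q = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
print(confirm(q, q, (0, 0, 3, 3), 0.8))  # True: identical regions
```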
Note that the inspection process need not be performed from only a single view information; multiple view information may be used. Therefore, a judgment is made as to whether the confirmation process has been performed a predetermined number of times (S2009), and if so, the process ends. For example, if the inspection is deemed passed when the confirmation process at three positions finds no problem, then, once three acceptable judgments have been made in S2008, the judgment in S2009 is yes, the inspection object is deemed acceptable, and the inspection process ends. On the other hand, even when the result in S2008 is acceptable, if it is only the first or second such judgment, the judgment in S2009 is no, and processing continues for the next view information.
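The S2005 to S2009 loop can be sketched as follows. `move_shoot_confirm` is a hypothetical callback standing in for S2005 through S2008 at one viewpoint, and failing the whole inspection on the first unacceptable result is an assumption; the patent only specifies the pass condition of a predetermined number of acceptable judgments.

```python
# Hedged sketch of the multi-viewpoint loop: the inspection passes only
# when the confirmation process succeeds for a predetermined number of
# viewpoints (three in the example in the text).

def inspect(view_infos, move_shoot_confirm, required=3):
    passed = 0
    for vi in view_infos:
        if not move_shoot_confirm(vi):  # S2005-S2008 for this viewpoint
            return False                # assumption: any failure ends it
        passed += 1
        if passed == required:          # S2009: predetermined count reached
            return True
    return False                        # ran out of viewpoints first

views = ["vp1", "vp2", "vp3", "vp4"]
print(inspect(views, lambda vi: True))         # True after three confirmations
print(inspect(views, lambda vi: vi != "vp2"))  # False: vp2 fails
```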
In the above description, the online processing is also performed by the information acquiring section 11110 and the processing section 11120, but this is not a limitation; the control section 3500 of the robot 30000 may perform the above processing, as described earlier. That is, the online processing may be performed by the information acquiring section 11110 and the processing section 11120 of the robot 30000 in Fig. 50, or by the control section 3500 of the robot in Fig. 51A. Alternatively, it may be performed by the information acquiring section 11110 and the processing section 11120 of the processing apparatus in Fig. 51A; in that case the processing apparatus 10000 can be thought of as the control device of the robot, as described earlier.
When the control section 3500 of the robot 30000 performs the online processing and judges, according to the movable-range information of the robot 30000, that the imaging section 5000 cannot be moved to the viewpoint position and line-of-sight direction corresponding to the i-th (i is a natural number) view information among the multiple view information, the control section 3500 skips the movement of the imaging section 5000 corresponding to the i-th view information, and performs the control relating to the j-th (j is a natural number satisfying i ≠ j) view information that follows the i-th view information in the movement order.
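The skip behavior can be sketched as below. `reachable` and `visit` are hypothetical callbacks, standing in for the movable-range judgment and for the movement-plus-inspection at one viewpoint respectively.

```python
# Hedged sketch of the skip control: a viewpoint judged unreachable from
# the movable-range information is skipped, and control proceeds to the
# next view information in the movement order.

def move_in_order(view_infos, reachable, visit):
    visited = []
    for vi in view_infos:        # already sorted by movement order
        if not reachable(vi):    # i-th viewpoint unreachable:
            continue             # skip its movement entirely
        visit(vi)                # move the imaging section, then inspect
        visited.append(vi)
    return visited

order = ["v1", "v2", "v3"]
print(move_in_order(order, lambda v: v != "v2", lambda v: None))
# ['v1', 'v3']
```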
The processing apparatus 10000 and the like of the present embodiment may also realize part or most of their processing by a program. In that case, a processor such as a CPU executes the program, thereby realizing the processing apparatus 10000 and the like of the present embodiment. Specifically, a program stored in a non-transitory information storage medium is read, and a processor such as a CPU executes the read program. Here, the information storage medium (a computer-readable medium) stores programs, data and the like, and its function can be realized by an optical disc (DVD, CD, etc.), an HDD (hard disk drive), a memory (card-type memory, ROM, etc.) or the like. A processor such as a CPU performs the various processes of the present embodiment according to the program (data) stored in the information storage medium. That is, the information storage medium stores a program for causing a computer (an apparatus including an operation section, a processing section, a storage section and an output section) to function as the sections of the present embodiment (a program for causing the computer to execute the processing of the sections).
Claims (17)
1. A robot, characterized in that it is a robot that performs an inspection process of inspecting an inspection object using a captured image of the inspection object shot by an imaging section, generates, according to first inspection information, second inspection information including an inspection region for the inspection process, and performs the inspection process according to the second inspection information,
wherein the second inspection information includes a view information group containing multiple view information, and each view information of the view information group includes a viewpoint position and a line-of-sight direction of the imaging section in the inspection process.
2. The robot according to claim 1, characterized in that,
for each view information of the view information group, a priority is set for moving the imaging section to the viewpoint position and line-of-sight direction corresponding to that view information.
3. The robot according to claim 2, characterized in that
the imaging section is moved to the viewpoint position and line-of-sight direction corresponding to each view information of the view information group according to a movement order set based on the priority.
4. The robot according to claim 3, characterized in that,
when it is judged according to movable-range information that the imaging section cannot be moved to the viewpoint position and line-of-sight direction corresponding to the i-th view information among the multiple view information, the movement of the imaging section based on the i-th view information is not performed, and the imaging section is moved according to the j-th view information that follows the i-th view information in the movement order, where i and j are natural numbers and i ≠ j.
5. The robot according to any one of claims 1 to 4, characterized in that
the first inspection information includes an inspection-process target position relative to the inspection object, and an object coordinate system corresponding to the inspection object is set with the inspection-process target position as a reference, so that the view information is generated using the object coordinate system.
6. The robot according to claim 5, characterized in that
the first inspection information includes object pose information representing the position and posture of the inspection object in a global coordinate system,
the view information in the global coordinate system is obtained according to the relative relationship, obtained based on the object pose information, between the global coordinate system and the object coordinate system, and
whether the imaging section can be moved to the viewpoint position and line-of-sight direction is judged according to movable-range information in the global coordinate system and the view information in the global coordinate system.
7. The robot according to claim 1, characterized in that
the inspection process is a process performed on the result of a robot operation, and the first inspection information is information obtained in the robot operation.
8. The robot according to claim 1, characterized in that
the first inspection information is information including at least one of shape information of the inspection object, pose information of the inspection object, and an inspection-process target position relative to the inspection object.
9. The robot according to claim 1, characterized in that
the first inspection information includes three-dimensional model data of the inspection object.
10. The robot according to claim 9, characterized in that
the inspection process is a process performed on the result of a robot operation, and
the three-dimensional model data includes post-operation three-dimensional model data obtained by performing the robot operation, and pre-operation three-dimensional model data, that is, three-dimensional model data of the inspection object before the robot operation.
11. The robot according to claim 9 or 10, characterized in that
the second inspection information includes a qualified image, and the qualified image is an image obtained by shooting the three-dimensional model data with a virtual camera arranged at the viewpoint position and line-of-sight direction corresponding to the view information.
12. The robot according to claim 10, characterized in that
the second inspection information includes a qualified image and a pre-operation image,
the qualified image is an image obtained by shooting the post-operation three-dimensional model data with a virtual camera arranged at the viewpoint position and line-of-sight direction corresponding to the view information,
the pre-operation image is an image obtained by shooting the pre-operation three-dimensional model data with the virtual camera arranged at the viewpoint position and line-of-sight direction corresponding to the view information, and
the inspection region is obtained by comparing the pre-operation image with the qualified image.
13. The robot according to claim 12, characterized in that,
in the comparison, a difference image, that is, the difference between the pre-operation image and the qualified image, is obtained, and the inspection region is the region of the difference image that contains the inspection object.
14. The robot according to claim 10, characterized in that
the second inspection information includes a qualified image and a pre-operation image,
the qualified image is an image obtained by shooting the post-operation three-dimensional model data with a virtual camera arranged at the viewpoint position and line-of-sight direction corresponding to the view information,
the pre-operation image is an image obtained by shooting the pre-operation three-dimensional model data with the virtual camera arranged at the viewpoint position and line-of-sight direction corresponding to the view information, and
a threshold used in the inspection process performed based on the captured image and the qualified image is set according to the similarity between the pre-operation image and the qualified image.
15. The robot according to claim 1, characterized in that
the robot includes at least a first arm and a second arm, and the imaging section is a hand-eye camera arranged on at least one of the first arm and the second arm.
16. A processing apparatus, characterized in that it is a processing apparatus that outputs information used in an inspection process of inspecting an inspection object using a captured image of the inspection object shot by an imaging section,
and that generates, according to first inspection information, second inspection information including view information, containing a viewpoint position and line-of-sight direction of the imaging section, and an inspection region for the inspection process,
and outputs the second inspection information to the apparatus that performs the inspection process.
17. An inspection method, characterized in that it is an inspection method of performing an inspection process of inspecting an inspection object using a captured image of the inspection object shot by an imaging section,
the inspection method including a step of generating, according to first inspection information, second inspection information including view information, containing a viewpoint position and line-of-sight direction of the imaging section, and an inspection region for the inspection process.
Applications Claiming Priority (9)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013-212930 | 2013-10-10 | ||
JP2013212930A JP6322949B2 (en) | 2013-10-10 | 2013-10-10 | Robot control apparatus, robot system, robot, robot control method, and robot control program |
JP2013226536A JP6390088B2 (en) | 2013-10-31 | 2013-10-31 | Robot control system, robot, program, and robot control method |
JP2013-226536 | 2013-10-31 | ||
JP2013-228653 | 2013-11-01 | ||
JP2013228653A JP6217322B2 (en) | 2013-11-01 | 2013-11-01 | Robot control apparatus, robot, and robot control method |
JP2013228655A JP6337445B2 (en) | 2013-11-01 | 2013-11-01 | Robot, processing apparatus, and inspection method |
JP2013-228655 | 2013-11-01 | ||
CN201410531769.6A CN104552292A (en) | 2013-10-10 | 2014-10-10 | Control system of robot, robot, program and control method of robot |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410531769.6A Division CN104552292A (en) | 2013-10-10 | 2014-10-10 | Control system of robot, robot, program and control method of robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104802166A CN104802166A (en) | 2015-07-29 |
CN104802166B true CN104802166B (en) | 2016-09-28 |
Family
ID=53069890
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711203574.9A Pending CN108081268A (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
CN201510137542.8A Expired - Fee Related CN104802174B (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
CN201510137541.3A Expired - Fee Related CN104802166B (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
CN201410531769.6A Pending CN104552292A (en) | 2013-10-10 | 2014-10-10 | Control system of robot, robot, program and control method of robot |
CN201510136619.XA Pending CN104959982A (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711203574.9A Pending CN108081268A (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
CN201510137542.8A Expired - Fee Related CN104802174B (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410531769.6A Pending CN104552292A (en) | 2013-10-10 | 2014-10-10 | Control system of robot, robot, program and control method of robot |
CN201510136619.XA Pending CN104959982A (en) | 2013-10-10 | 2014-10-10 | Robot control system, robot, program and robot control method |
Country Status (1)
Country | Link |
---|---|
CN (5) | CN108081268A (en) |
Families Citing this family (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108081268A (en) * | 2013-10-10 | 2018-05-29 | 精工爱普生株式会社 | Robot control system, robot, program and robot control method |
CN104965489A (en) * | 2015-07-03 | 2015-10-07 | 昆山市佰奥自动化设备科技有限公司 | CCD automatic positioning assembly system and method based on robot |
KR20180044946A (en) * | 2015-08-25 | 2018-05-03 | 카와사키 주코교 카부시키 카이샤 | Information sharing system and information sharing method between a plurality of robot systems |
JP6689974B2 (en) * | 2016-07-06 | 2020-04-28 | 株式会社Fuji | Imaging device and imaging system |
JP6490032B2 (en) * | 2016-08-10 | 2019-03-27 | ファナック株式会社 | Robot controller for assembly robot |
US11373286B2 (en) * | 2016-11-07 | 2022-06-28 | Nabtesco Corporation | Status checking device for built-in object, operation checking device and method for checking built-in object |
JP6833460B2 (en) * | 2016-11-08 | 2021-02-24 | 株式会社東芝 | Work support system, work method, and processing equipment |
JP7314475B2 (en) * | 2016-11-11 | 2023-07-26 | セイコーエプソン株式会社 | ROBOT CONTROL DEVICE AND ROBOT CONTROL METHOD |
JP2018126799A (en) * | 2017-02-06 | 2018-08-16 | セイコーエプソン株式会社 | Control device, robot, and robot system |
KR101963643B1 (en) * | 2017-03-13 | 2019-04-01 | 한국과학기술연구원 | 3D Image Generating Method And System For A Plant Phenotype Analysis |
CN106926241A (en) * | 2017-03-20 | 2017-07-07 | 深圳市智能机器人研究院 | A kind of the tow-armed robot assembly method and system of view-based access control model guiding |
EP3432099B1 (en) * | 2017-07-20 | 2021-09-01 | Siemens Aktiengesellschaft | Method and system for detection of an abnormal state of a machine |
JP6795471B2 (en) * | 2017-08-25 | 2020-12-02 | ファナック株式会社 | Robot system |
JP6963748B2 (en) * | 2017-11-24 | 2021-11-10 | 株式会社安川電機 | Robot system and robot system control method |
JP6873941B2 (en) * | 2018-03-02 | 2021-05-19 | 株式会社日立製作所 | Robot work system and control method of robot work system |
JP6845180B2 (en) * | 2018-04-16 | 2021-03-17 | ファナック株式会社 | Control device and control system |
TWI681487B (en) * | 2018-10-02 | 2020-01-01 | 南韓商Komico有限公司 | System for obtaining image of 3d shape |
JP6904327B2 (en) * | 2018-11-30 | 2021-07-14 | オムロン株式会社 | Control device, control method, and control program |
CN109571477B (en) * | 2018-12-17 | 2020-09-22 | 西安工程大学 | Improved comprehensive calibration method for robot vision and conveyor belt |
CN109625118B (en) * | 2018-12-29 | 2020-09-01 | 深圳市优必选科技有限公司 | Impedance control method and device for biped robot |
JP6892461B2 (en) * | 2019-02-05 | 2021-06-23 | ファナック株式会社 | Machine control device |
EP3696772A3 (en) * | 2019-02-14 | 2020-09-09 | Denso Wave Incorporated | Device and method for analyzing state of manual work by worker, and work analysis program |
JP2020142323A (en) * | 2019-03-06 | 2020-09-10 | オムロン株式会社 | Robot control device, robot control method and robot control program |
JP6717401B1 (en) * | 2019-04-01 | 2020-07-01 | 株式会社安川電機 | Programming support device, robot system, and programming support method |
CN110102490B (en) * | 2019-05-23 | 2021-06-01 | 北京阿丘机器人科技有限公司 | Assembly line parcel sorting device based on vision technology and electronic equipment |
JP2021094677A (en) * | 2019-12-19 | 2021-06-24 | 本田技研工業株式会社 | Robot control device, robot control method, program and learning model |
JP2021133470A (en) * | 2020-02-28 | 2021-09-13 | セイコーエプソン株式会社 | Control method of robot and robot system |
CN111482800B (en) * | 2020-04-15 | 2021-07-06 | 深圳市欧盛自动化有限公司 | Electricity core top bridge equipment |
CN111761575B (en) * | 2020-06-01 | 2023-03-03 | 湖南视比特机器人有限公司 | Workpiece, grabbing method thereof and production line |
CN111993423A (en) * | 2020-08-17 | 2020-11-27 | 北京理工大学 | Modular intelligent assembling system |
CN112076947A (en) * | 2020-08-31 | 2020-12-15 | 博众精工科技股份有限公司 | Bonding equipment |
JP2022073192A (en) * | 2020-10-30 | 2022-05-17 | セイコーエプソン株式会社 | Control method of robot |
CN113305839B (en) * | 2021-05-26 | 2022-08-19 | 深圳市优必选科技股份有限公司 | Admittance control method and admittance control system of robot and robot |
CN114310063B (en) * | 2022-01-28 | 2023-06-06 | 长春职业技术学院 | Welding optimization method based on six-axis robot |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5608847A (en) * | 1981-05-11 | 1997-03-04 | Sensor Adaptive Machines, Inc. | Vision target based assembly |
DE3405909A1 (en) * | 1984-02-18 | 1985-08-22 | Licentia Patent-Verwaltungs-Gmbh, 6000 Frankfurt | DEVICE FOR DETECTING, MEASURING ANALYSIS AND / OR REGULATING TECHNICAL PROCEDURES |
JPS62192807A (en) * | 1986-02-20 | 1987-08-24 | Fujitsu Ltd | Robot control system |
JPH03220603A (en) * | 1990-01-26 | 1991-09-27 | Citizen Watch Co Ltd | Robot control method |
US6718233B2 (en) * | 2002-03-29 | 2004-04-06 | Nortel Networks, Ltd. | Placement of an optical component on a substrate |
JP3940998B2 (en) * | 2002-06-06 | 2007-07-04 | 株式会社安川電機 | Robot equipment |
WO2008047872A1 (en) * | 2006-10-20 | 2008-04-24 | Hitachi, Ltd. | Manipulator |
US8864652B2 (en) * | 2008-06-27 | 2014-10-21 | Intuitive Surgical Operations, Inc. | Medical robotic system providing computer generated auxiliary views of a camera instrument for controlling the positioning and orienting of its tip |
JP5239901B2 (en) * | 2009-01-27 | 2013-07-17 | 株式会社安川電機 | Robot system and robot control method |
JP5509859B2 (en) * | 2010-01-13 | 2014-06-04 | 株式会社Ihi | Robot control apparatus and method |
JP4837116B2 (en) * | 2010-03-05 | 2011-12-14 | ファナック株式会社 | Robot system with visual sensor |
CN102059703A (en) * | 2010-11-22 | 2011-05-18 | 北京理工大学 | Self-adaptive particle filter-based robot vision servo control method |
CN103038030B (en) * | 2010-12-17 | 2015-06-03 | 松下电器产业株式会社 | Apparatus and method for controlling elastic actuator drive mechanism |
EP2708334B1 (en) * | 2011-05-12 | 2020-05-06 | IHI Corporation | Device and method for controlling prediction of motion |
JP2012254518A (en) * | 2011-05-16 | 2012-12-27 | Seiko Epson Corp | Robot control system, robot system and program |
JP5834545B2 (en) * | 2011-07-01 | 2015-12-24 | セイコーエプソン株式会社 | Robot, robot control apparatus, robot control method, and robot control program |
CN102501252A (en) * | 2011-09-28 | 2012-06-20 | 三一重工股份有限公司 | Method and system for controlling movement of tail end of executing arm |
JP6000579B2 (en) * | 2012-03-09 | 2016-09-28 | キヤノン株式会社 | Information processing apparatus and information processing method |
CN108081268A (en) * | 2013-10-10 | 2018-05-29 | 精工爱普生株式会社 | Robot control system, robot, program and robot control method |
2014
- 2014-10-10 CN CN201711203574.9A patent/CN108081268A/en active Pending
- 2014-10-10 CN CN201510137542.8A patent/CN104802174B/en not_active Expired - Fee Related
- 2014-10-10 CN CN201510137541.3A patent/CN104802166B/en not_active Expired - Fee Related
- 2014-10-10 CN CN201410531769.6A patent/CN104552292A/en active Pending
- 2014-10-10 CN CN201510136619.XA patent/CN104959982A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
CN104802174A (en) | 2015-07-29 |
CN108081268A (en) | 2018-05-29 |
CN104552292A (en) | 2015-04-29 |
CN104959982A (en) | 2015-10-07 |
CN104802166A (en) | 2015-07-29 |
CN104802174B (en) | 2016-09-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104802166B (en) | Robot control system, robot, program and robot control method | |
CN106873549B (en) | Simulator and analogy method | |
JP5852364B2 (en) | Information processing apparatus, information processing apparatus control method, and program | |
JP5743499B2 (en) | Image generating apparatus, image generating method, and program | |
CN106873550B (en) | Simulation device and simulation method | |
CN104057447B (en) | The manufacture method of robot picking up system and machined object | |
Dallej et al. | Modeling and vision-based control of large-dimension cable-driven parallel robots using a multiple-camera setup | |
EP1435280B1 (en) | A method and a system for programming an industrial robot | |
US20140039679A1 (en) | Apparatus for taking out bulk stored articles by robot | |
EP2682710B1 (en) | Apparatus and method for three-dimensional measurement and robot system comprising said apparatus | |
CN109807882A (en) | Holding system, learning device and holding method | |
US20050149231A1 (en) | Method and a system for programming an industrial robot | |
JP2018176334A (en) | Information processing device, measurement device, system, interference determination method and article manufacturing method | |
CN110088797A (en) | Industrial equipment image recognition processor and controller | |
CN102135776A (en) | Industrial robot control system based on visual positioning and control method thereof | |
JP2011112400A (en) | Three-dimensional visual sensor | |
Kohn et al. | Towards a real-time environment reconstruction for VR-based teleoperation through model segmentation | |
CN109976258A (en) | Link information generating means, link information generation method and recording medium | |
Lepora et al. | Pose-based tactile servoing: Controlled soft touch using deep learning | |
Dajles et al. | Teleoperation of a humanoid robot using an optical motion capture system | |
Gosselin et al. | Robot Companion, an intelligent interactive robot coworker for the Industry 5.0 | |
CN114800524B (en) | System and method for actively preventing collision of man-machine interaction cooperative robot | |
Ferrini et al. | Kinematically-consistent real-time 3D human body estimation for physical and social HRI | |
Freund et al. | Projective Virtual Reality in space applications: A telerobotic ground station for a space mission | |
Duan et al. | Analyze assembly skills using a motion simulator |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
EXSB | Decision made by sipo to initiate substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20160928 Termination date: 20201010 |