JP4869852B2 - Teaching method for transfer robot - Google Patents

Teaching method for transfer robot

Info

Publication number
JP4869852B2
JP4869852B2 (application JP2006266030A)
Authority
JP
Japan
Prior art keywords
teaching
hand
robot
cassette
reference position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2006266030A
Other languages
Japanese (ja)
Other versions
JP2008080466A (en)
Inventor
泰伸 音川
Original Assignee
株式会社ダイヘン
Priority date
Filing date
Publication date
Application filed by 株式会社ダイヘン filed Critical 株式会社ダイヘン
Priority to JP2006266030A
Publication of JP2008080466A
Application granted
Publication of JP4869852B2
Application status: Active

Description

  The present invention relates to a teaching method for a transfer robot, and more particularly to a technique for improving the safety and efficiency of teaching work.

  A prior art example of this type of teaching method is Patent Document 1. The teaching method of Patent Document 1 relates to a robot that transports a wafer, which is a semiconductor substrate. A cross-shaped positioning mark is formed on the upper surface of the robot hand that holds and transports the workpiece (wafer), and a camera is installed in the cassette in which the workpiece is stored.

  The cross-shaped mark has one side extending in the front-rear direction, which is the direction in which the hand is inserted into the cassette, and the other side extending in the left-right direction perpendicular to it. The camera is installed at the left-right center of a teaching plate whose outer shape is substantially similar to that of the wafer, and captures the mark from above when the hand is inserted into the cassette.

  In the teaching method of Patent Document 1, the operator displaces the posture of the robot while watching the monitor image so that the mark photographed by the camera is in focus and the two sides of the cross run straight in the vertical and horizontal directions on the monitor, that is, so that neither side of the cross is slanted. The position of the hand at which an ideal cross shape is displayed at the center of the monitor is then taught to the robot as the reference position for holding the workpiece, in other words, as the approach limit position of the hand within the cassette during the transport operation.

According to this method, the operator can teach the reference position while looking at the monitor image taken by the camera, and can therefore proceed with the teaching work safely without approaching the robot. Compared with a method of teaching the reference position while visually confirming the position of the hand, the teaching operation can also be performed quickly and with less effort.
JP 2002-307348 A

  The present inventors are involved in the development of a transfer robot that handles heavy workpieces such as the plate glass used as the substrate of a large liquid crystal television, and one of the important issues is how to teach such a robot safely and quickly. However, the teaching method of Patent Document 1 is suited to robots that handle relatively small, light workpieces such as semiconductor wafers, and is unsuitable for robots that handle heavy workpieces such as the plate glass used as a liquid-crystal-television substrate.

  That is, when a small workpiece such as a semiconductor wafer several tens of centimeters across is transported from a cassette as in Patent Document 1, the moving distance of the hand in the cassette is also as short as several tens of centimeters. Strict precision is therefore not required for the moving posture of the hand in the front-rear direction, and if only the approach limit position of the hand is taught to the robot, there is little possibility of the workpiece coming into contact with the entrance of the cassette.

On the other hand, when a plate glass several meters square, serving as the substrate of a large television, is taken out of a square box-shaped cassette whose sides are several meters long, the moving distance of the hand in the cassette is also a long distance of several meters. A high degree of precision is therefore required for the posture control of the hand in the front-rear direction, and reference positions must be taught not only at the approach limit position of the hand but also at multiple points.
However, the conventional method of Patent Document 1, which captures the hand that has entered the cassette with only one camera installed in the cassette, was never intended to teach the robot reference positions other than the approach limit position, and cannot accurately teach the movement posture of the hand in the front-rear direction. For this reason, the teaching method of Patent Document 1 cannot be applied as it is to a transfer robot for large workpieces.

  The present invention has been made to solve the above-described problems. An object of the present invention is to provide a teaching method for a transfer robot in which the teaching operation can be carried out while viewing a monitor image, so that the method is excellent in safety and the reference positions can be taught quickly and with little effort. A further object is to provide a teaching method that can teach not only the approach limit position of the hand relative to the cassette but also a plurality of reference positions with high accuracy while observing the monitor image, and that is therefore well suited to transfer robots that transfer large workpieces requiring highly accurate hand control.

The present invention according to claim 1 is a method for teaching a reference position of a workpiece transfer operation to a robot that transfers a workpiece from a cassette. The method comprises a mastering step of operating the robot so as to simulate the workpiece transfer operation with respect to the cassette and photographing, with a camera mounted on the robot hand, a model image showing an appropriate relative positional relationship between the robot hand and the cassette at the reference position; and a teaching step of operating the robot with respect to the cassette, displaying the real space image captured by the camera mounted on the robot hand superimposed on the model image obtained in the mastering step on a monitor, displacing the posture of the robot while referring to the monitor display so that the real space image is aligned with the model image, and teaching the robot the posture position at which the two images coincide as the reference position.

  According to the present invention, an image processing step may be provided that extracts the image regions of the hand and the cassette from the model image obtained in the mastering step and identifies and displays these regions, and in the teaching step the real space image can then be superimposed on the image-processed model image. Specific examples of this image processing include color-coding the image region indicating the hand and coloring the outer edge of the image region indicating a shelf board.

  Specifically, as in the third aspect of the present invention, the cassette includes a square box-shaped storage chamber having an entrance at its front, and shelves for carrying rectangular plate-shaped workpieces are arranged in the storage chamber in multiple vertical stages. Each shelf is composed of a left-right pair of shelf boards that face each other on the opposite side walls of the storage chamber and run in the front-rear horizontal direction. A pair of left and right cameras, each having at least a shelf board and the edge of the hand in its field of view, is mounted on the left and right ends of the hand. In the mastering step, model images indicating the appropriate relative positional relationship of the hand with respect to the shelf boards are taken by both cameras; in the teaching step, the posture of the robot is displaced so that the real space images of the shelf boards and the hand overlap the shelf boards and the hand in the model images, and the posture position at which the two sets of images overlap is taught to the robot as the reference position.

  As in the fourth aspect of the present invention, the series of reference position teaching operations from the mastering step to the teaching step may be performed only on the two shelves at the lowermost and uppermost levels. The reference positions of the shelves located between them are then calculated by shifting, in the height direction, the reference positions obtained in the teaching step for the lowermost and uppermost shelves, and these calculated values are taught to the robot as the reference positions of the intermediate shelves.

  The present invention as set forth in claim 5 further comprises a step of parameterizing the relative positional relationship between the hand and the cassette in the model image or the real space image, for example as the coordinate position of the hand on the image or as a distance from the cassette. By comparing the parameter value in the real space image photographed in the teaching step with the parameter value in the model image, the deviation between the appropriate relative position of the hand and the cassette and the taught reference position can be checked.
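The parameter comparison of claim 5 can be pictured with a minimal sketch, not part of the patent: the gap values measured on the real space image are compared against those stored with the model image. Parameter names, units, and the tolerance are all assumptions for illustration.

```python
# Illustrative sketch only: compare parameterized hand-cassette gaps
# measured on the real space image with those stored for the model
# image. Parameter names, units, and the tolerance are assumptions.
def gap_deviation(model_params, live_params, tolerance=2.0):
    """Return (per-parameter deviation, True if all within tolerance)."""
    deviations = {k: live_params[k] - model_params[k] for k in model_params}
    within = all(abs(d) <= tolerance for d in deviations.values())
    return deviations, within
```

For example, a measured right-side gap 3.5 units wider than in the model image would be flagged, indicating that the hand is offset toward the left shelf board.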

In the first aspect of the present invention, a model image showing the appropriate relative positional relationship between the hand and the cassette at the reference position is captured in advance in the mastering step; in the teaching step, the real space image is superimposed on the model image on the monitor, and the posture of the robot when the two images coincide is taught as the reference position. In other words, when the real space image matches the model image in the teaching step, the robot is judged to have taken the same posture in the cassette as when the model image was taken in the mastering step, and this posture is taught as the reference position of the transport operation.
In the teaching step, therefore, the operator can proceed with the teaching work remotely while watching the monitor display, and there is no need to approach the robot to check its posture or its margin gap with the cassette, so the teaching work can be performed safely. In addition, since the teaching work consists simply of an operation of substantially matching the real space image with the model image, it can be carried out efficiently and with less effort than before.

  In addition, as many reference positions can be taught in the teaching step as there are monitor images photographed in the mastering step, and there is no restriction on the location or number of reference positions that can be taught. Compared with the conventional method of Patent Document 1, in which only the approach limit position can be taught, multiple reference positions can be taught and the hand can be moved precisely and accurately. The teaching method according to the present invention is therefore suitable for a large transport robot in which the hand must move a long distance of several meters in the cassette and accurate hand control is required.

  When the teaching step is performed using a model image in which the image regions of the hand and the cassette have been identified and displayed by image processing, as in the second aspect of the present invention, the two images superimposed on the monitor become easy to distinguish, and the position of the hand and the like can be captured accurately. As a result, the aligning operation can be performed quickly and the teaching work can be carried out efficiently.

In the form in which the workpieces are placed on the vertically multi-staged shelves arranged in the rectangular box-shaped storage chamber, the hand is advanced deep into the storage chamber from the entrance formed on its front side to take out a workpiece. It is therefore particularly important to strictly maintain the left-right margin gap between the edge of the hand and the shelf boards so that the hand does not collide with them.
Therefore, as in the third aspect of the present invention, if two cameras, each having at least a shelf board and the edge of the hand in its field of view, are mounted on the left and right ends of the hand and the teaching step is performed using the pair of left and right images captured by these cameras, the real space images can be overlaid on the left and right model images and the margin gap between the hand and the shelf boards can be confirmed from both the left and right directions, so that the reference position can be taught with higher accuracy. This makes it possible to teach reference positions in a posture in which the hand always extends straight in the front-rear direction while keeping the left and right margins constant, effectively preventing the hand or the workpiece from colliding with the shelf boards and the like.

  As in the fourth aspect of the present invention, if the reference positions are taught only for the lowermost and uppermost shelves and the reference positions of the shelves located between them are obtained by shifting the lowermost or uppermost reference positions in the height direction, the teaching work proceeds far more quickly than when the full series of teaching work, consisting of a mastering step and a teaching step, is performed for every stage, contributing to a significant improvement in work efficiency.

  If the relative positional relationship between the hand and the cassette in the model image or the real space image is parameterized as the coordinate position of the hand on the image or as a distance from the cassette, and the deviation between the appropriate relative position and the taught reference position is checked by comparing the parameter value in the real space image taken in the teaching step with the parameter value in the model image, the reference position can be taught with higher accuracy than when the teaching operation relies only on the images displayed on the monitor. Moreover, since the teaching work can be performed while viewing the parameters, the work itself is simplified.

In the following, an embodiment in which the teaching method according to the present invention is applied to a transfer robot for transferring plate glass serving as the substrate of a large liquid crystal television will be described with reference to the drawings. FIG. 1 shows the operating state of the transfer robot, FIG. 2 shows the configuration of the transfer robot, FIGS. 3 and 4 explain the photographing areas of the cameras, FIG. 5 explains the reference positions, FIG. 6 explains the mounting positions of the cameras on the hand, and FIG. 7 is a flowchart explaining the teaching method of the transfer robot.
In the following description, as indicated by arrows Y1 and Y2 in FIG. 1, the direction in which the hand of the transfer robot moves into and out of the cassette is defined as the front-rear direction. As indicated by arrow Y3, the horizontal direction perpendicular to the front-rear directions Y1 and Y2 is defined as the left-right direction.

  As shown in FIG. 1, this transfer robot 1 (hereinafter simply referred to as the "robot") is installed in a work area S maintained in a highly clean environment. Works 3, which are rectangular plate glasses carried into the work area S stored in cassettes 2, are taken out of a cassette 2 one at a time and transferred to the work set position 4 of a processing apparatus.

As shown in FIGS. 1 and 2, the robot 1 is a cylindrical coordinate system robot comprising a right cylindrical base 5, a first arm 6 mounted on the upper end of the base 5 so as to be capable of rotational driving and vertical driving, a second arm 7 rotatably mounted on the extended end of the first arm 6, and a hand 8 rotatably mounted on the extended end of the second arm 7 to hold the workpiece 3. In the present invention, however, the lifting mechanism of the robot 1 is not limited to the cylindrical coordinate system configuration of the illustrated example.
The robot 1 is slidable in the left-right direction along a rail 9 installed on the floor surface of the work area S. The robot 1 moves to a position in front of any one of the cassettes 2, takes the workpieces 3 out of that cassette 2, and places them on the work set position 4.

  The hand 8 includes long forks 10 that support the work 3 and a holder 11 that supports the base end portions of the forks 10. The holder 11 has a square box shape, and a plurality of forks 10 (four in the present embodiment) are cantilevered from one of its side surfaces.

  The actuators that move the arms 6 and 7 and the hand 8 are composed of motors and the like arranged on the respective rotating shafts, and are driven by drive signals sent from the motion controller 14 via the motor driver 13. The motion controller 14 sends the drive signals based on control signals from a host controller (not shown).

As shown in FIGS. 3 and 4, the cassette 2 has a rectangular box-shaped case body 20 as a base, an entrance 21 for the work 3 is provided on its front surface, and shelves R are arranged in multiple vertical stages in the storage chamber 22 inside. Each shelf R is composed of a pair of left and right shelf plates Ra and Rb that face each other on the left and right side walls 23 of the storage chamber 22 and run in the front-rear horizontal direction. The work 3 is stored in the storage chamber 22 in a state of being bridged between the shelf plates Ra and Rb, that is, with the left and right lower surfaces of the work 3 placed on the shelf plates Ra and Rb. In the cassette 2 according to the present embodiment, a total of N stages of shelves R are provided, from the lowermost first slot (R1) to the uppermost Nth slot (RN).
In FIGS. 3 and 4, for example, "RaN" denotes the left shelf plate (Ra) constituting the Nth slot (RN).

As shown in FIG. 1, when the cassette 2 is arranged at a predetermined position in the work area S and the workpiece 3 is transferred by the robot 1, the fork 10 is inserted into the storage chamber 22 through the entrance 21, and one workpiece 3 is placed on the fork 10, removed from the cassette 2, and placed on the work set position 4 of the processing apparatus.
More specifically, on receiving a carry-out command, the robot 1 is displaced from a predetermined standby posture so that the fork 10 runs in the front-rear direction in front of the cassette 2. Next, the fork 10 is inserted straight into the cassette 2 while its horizontal posture is maintained. When the fork 10 reaches a predetermined position in the cassette 2, it is raised by a predetermined height while kept horizontal, and the work 3 is placed on the fork 10. Next, with the horizontal posture still maintained, the fork 10 is pulled straight out toward the front, and the work 3 is taken out of the cassette 2. Finally, the entire robot 1 is rotated around the base 5 and the work 3 is placed at the work set position 4 of the processing apparatus.
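The carry-out motion described above can be summarized, purely for illustration, as an ordered sequence of steps; the step names below are assumptions, not identifiers from the patent.

```python
# Illustrative summary of the carry-out sequence; step names assumed.
CARRY_OUT_SEQUENCE = [
    ("align", "displace from standby so the fork runs front-rear at the cassette"),
    ("insert", "advance the fork straight into the cassette, kept horizontal"),
    ("lift", "raise the fork a predetermined height to pick the work off the shelf"),
    ("withdraw", "pull the fork straight out toward the front"),
    ("place", "rotate about the base and set the work at the work set position"),
]

def run_carry_out(execute):
    """Apply `execute(name, description)` to each step in order."""
    for name, description in CARRY_OUT_SEQUENCE:
        execute(name, description)
```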

The series of transport operations from the cassette 2 described above is executed according to the reference positions stored in the storage device of the motion controller 14. That is, prior to actual work processing by the robot 1, reference positions serving as references for the work positions are taught, and in actual work processing the postures of the arms 6 and 7 and the hand 8 are displaced so as to reproduce these reference positions.
In the present embodiment, as shown in FIG. 5, seven reference positions are taught for each of the first to Nth slots: a first reference position (for example, (1-1)) in which the fork 10 is in a horizontal posture in front of the cassette 2, a second reference position (1-2) in which about half of the fork 10 is inserted into the cassette 2, a third reference position (1-3) in which substantially the entire fork 10 is inserted into the cassette 2, a fourth reference position (1-4) in which the upper surface of the fork 10 abuts the lower surface of the workpiece 3, a fifth reference position (1-5) in which the workpiece 3 is completely lifted, a sixth reference position (1-6) in which the fork 10 is pulled out by about half, and a seventh reference position (1-7) in which the fork 10 is completely pulled out of the cassette 2. The conveying operation from the cassette 2 is executed according to these first to seventh reference positions.
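The (slot-index) labeling of the reference positions, such as (1-1) through (N-7), can be sketched with a hypothetical helper (not from the patent):

```python
# Illustrative helper for the (slot-index) reference position labels.
def slot_labels(slot):
    """The seven labels for one slot, e.g. '1-1' .. '1-7' for slot 1."""
    return [f"{slot}-{i}" for i in range(1, 8)]
```

With N slots, the full table holds 7 × N reference positions, one set of seven per slot.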

  In teaching these reference positions, a teaching method called the "teaching playback method" is adopted. The "teaching playback method" is a method of teaching reference positions mainly using a teaching box 15 (see FIG. 2) connected to the motion controller 14: the robot 1 is operated by key operations on the teaching box 15, and the reference positions required for a series of work are taught while the robot is moved sequentially to the actual work positions. One work operation is composed of units called "steps", each a movement to a certain reference position, and adding a step means moving from the current reference position to the next reference position. The above-described transfer operation of the workpiece 3 from the cassette 2 is thus composed of six steps, from the first reference position to the seventh reference position.

  The teaching method according to the present embodiment is characterized in that a pair of left and right cameras 30a and 30b is mounted on the hand 8 of the robot 1 and the reference positions are taught based on the images captured by the cameras 30a and 30b. More specifically, as shown in FIGS. 2 and 4, the cameras 30a and 30b are detachably fixed to the left and right side surfaces of the holder 11 in postures oriented in the extending direction of the forks 10. As shown in FIG. 6, the cameras 30a and 30b are fixed to the holder 11 so that their vertical imaging centers are aligned with the vertical centers of the forks, that is, with the center position in the thickness direction of the forks. As shown in FIG. 4, the outer edges of the forks 10 positioned at the left and right ends are therefore always displayed in the captured images 31a and 31b of the cameras 30a and 30b. Further, the fields of view of the cameras 30a and 30b are set so that, when the forks 10 are inserted into the cassette 2, the shelf plates Ra and Rb and the outer edges of the forks 10 are projected. The images captured by the two cameras 30a and 30b are as shown in FIGS. 3 and 4, and these captured images 31a and 31b are displayed on the monitors 17a and 17b of an image processing device 16 connected to the robot 1 via cables. The cameras 30a and 30b may instead be arranged on the lower surfaces of both sides of the holder 11 or on both sides of the forks 10, at positions that do not interfere with the workpiece and do not deform under load.

  Next, a specific procedure of the teaching method will be described with reference to the flowchart of FIG. As shown in FIG. 7, this teaching method is roughly divided into a mastering step (S1), an image processing step (S2), a teaching step (S3), a parameter processing step (S4), and a calculation step (S5).

  The mastering step (S1) is performed on the manufacturer side prior to market shipment of the robot 1, and its objective is to obtain model images indicating the appropriate relative positional relationship between the hand 8 of the robot 1 and the cassette 2 at each reference position. Specifically, a cassette 2 serving as the reference for the teaching positions (hereinafter referred to as the reference cassette) is first prepared and installed and fixed in the vicinity of the robot 1. In many cases, a work area S of the robot 1 as shown in FIG. 1 is temporarily set up at the manufacturer's research and development site, and the reference cassette 2 is set in the vicinity of the robot 1 in that area S. The reference cassette 2 is not substantially different from the cassettes 2 used in actual conveyance work.

  Next, an operator on the manufacturer side operates the teaching box 15 to simulate the workpiece unloading operation with respect to the reference cassette 2, and model images are taken while the hand 8 is moved sequentially to the actual work positions. In this work, the postures of the hand 8 and the robot 1 are displaced to the appropriate work positions serving as the reference positions, that is, to the appropriate relative positions of the hand 8 with respect to the cassette 2. To this end, the operator visually confirms the position of the hand 8 near the robot 1, measures the distance (margin gap) between the shelves R and the fork 10 with a measuring tape, checks the captured images 31a and 31b (real space images) from the cameras 30a and 30b displayed on the monitors 17a and 17b of the image processing device 16, and strictly adjusts the position of the hand 8 so that it is positioned at the predetermined reference position. If positioning marks are placed at the centers of the shelf plates Ra and Rb in the thickness direction, the positioning of the hand 8 can be adjusted more easily.

When the reference position of the hand 8 with respect to the reference cassette 2 has been determined, the real space images captured by the left and right cameras 30a and 30b at that position are taken as model images and stored in the storage device of the image processing device 16.
As shown in FIG. 5, this model image photographing operation is performed at a total of 14 positions: from the first reference position (1-1) to the seventh reference position (1-7) of the first slot (R1), located at the lowermost stage, and from the first reference position (N-1) to the seventh reference position (N-7) of the Nth slot (RN), located at the uppermost stage. A total of 28 model images are thus taken by the cameras 30a and 30b.

When the mastering step (S1) is completed, the process proceeds to the image processing step (S2). In this step, the operator operates the image processing device 16 to extract the main points and regions included in each model image, and applies image processing so that these points and regions can be easily identified on the monitors 17a and 17b. Specifically, a coloring process is applied to the surfaces and contours of the forks 10, the shelf plates Ra and Rb, the inner surfaces of the cassette 2, and so on. These image regions may also be processed into line drawings.
At this time, the coordinate positions of the main points on the image, such as the corners and intersections of the shelf plates Ra and Rb, are extracted and stored in the storage device of the image processing device 16 together with the processed model image. In addition, the thickness dimensions of the shelf plates Ra and Rb on the model image and the facing distances between the shelf plates Ra and Rb and the forks 10 are extracted, parameterized, and stored in the storage device together with the model image.
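The stored parameters (shelf-plate thickness and the shelf-to-fork facing distance) could be derived from the extracted point coordinates roughly as follows; this is a sketch with assumed pixel-coordinate inputs, not the patent's implementation.

```python
# Illustrative sketch: derive the stored parameters from extracted
# image coordinates (pixels). All coordinate names are assumptions.
def shelf_parameters(shelf_top_y, shelf_bottom_y, shelf_inner_x, fork_edge_x):
    """Thickness of a shelf plate and its facing distance to the fork."""
    return {
        "shelf_thickness": abs(shelf_bottom_y - shelf_top_y),
        "shelf_fork_gap": abs(fork_edge_x - shelf_inner_x),
    }
```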

  As described above, the manufacturer of the robot 1 performs the mastering step (S1) and the image processing step (S2) prior to market shipment of the robot 1, and stores the resulting processed model images in the storage device of the image processing device 16. Alternatively, the model images may be stored on a storage medium and read into the image processing device 16 prior to the subsequent teaching step (S3).

  Next, the teaching step (S3) is performed. This step is performed after the purchased robot 1 has been installed in the work area S (see FIG. 1), and the operator may belong to either the manufacturer side or the user side. In this step, the cameras 30a and 30b are first fixed to the holder 11 of the robot 1. The fixing locations of the cameras 30a and 30b are the positions shown in FIG. 2, and coincide exactly with their fixing locations in the preceding mastering step (S1).

  Next, the operator operates the robot 1 to teach the reference positions. Specifically, the model image for the reference position to be taught (for example, the model image of the first reference position (1-1; see FIG. 5) of the first slot) is displayed on the monitors 17a and 17b of the image processing device 16, and the real space images captured by the cameras 30a and 30b are then displayed superimposed (overlaid) on the model image. The teaching box 15 is operated to displace the posture of the robot 1 so that the real space image is aligned with the model image, and the posture position at which the real space image completely matches the model image is taught to the robot 1 as the reference position; that is, position data for the coordinates of the reference position is stored in the storage device of the motion controller 14. At this time, the posture of the robot 1 must be adjusted so that the real space images captured by both the left and right cameras 30a and 30b, not just one of them, coincide with the model images.
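The "completely matches" condition can be pictured as a pixel-level comparison of the superimposed images. The sketch below assumes both images are binarized to the same size; the mismatch threshold is an illustrative assumption, not a value from the patent.

```python
# Illustrative sketch: accept the pose only when the binarized real
# space image and model image disagree on few pixels. Images are
# equal-sized lists of 0/1 rows; the threshold is an assumption.
def images_match(model, live, max_mismatch_ratio=0.01):
    total = mismatched = 0
    for model_row, live_row in zip(model, live):
        for m, l in zip(model_row, live_row):
            total += 1
            mismatched += (m != l)
    return mismatched / total <= max_mismatch_ratio
```

In practice such a check would be applied to the left and right camera images separately, accepting the pose only when both match, as the description requires.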

The operator performs the above teaching operation at a total of 14 positions: from the first reference position (1-1) to the seventh reference position (1-7) of the first slot, and from the first reference position (N-1) to the seventh reference position (N-7) of the Nth slot.
In addition, as shown in FIG. 1, when the robot 1 is in charge of transporting the works 3 from a plurality of cassettes 2 (four in the illustrated example), the operator repeats the above teaching work for the cassette 2 at each position.

  At this time, the teaching work can be advanced while the position of the hand 8 is checked using the parameters obtained in the image processing step (S2) (parameter processing step: S4). Specifically, for example, the facing distance between the shelf plates Ra and Rb and the fork 10 in the real space image is parameterized and compared with the parameter value of the facing distance obtained in the image processing step (S2). The teaching work can thus proceed while it is checked whether there is any deviation between the two images, in other words, between the reference position defined in the mastering step (S1) and the current position of the hand 8. The parameter values are displayed on the monitors 17a and 17b or on the teaching box 15.

Next, based on the reference positions of the first and Nth slots taught as described above, the reference positions of the second to N'th slots (N' being the slot immediately below the topmost Nth slot) are obtained by calculation (calculation step: S5). That is, as shown in FIG. 5, the position data for the coordinates of the first to seventh reference positions ((1-1) to (1-7)) taught for the first slot is shifted in the height direction to create the position data of the reference positions of the second to N'th slots, and these are taught to the robot 1. Alternatively, the position data of the reference positions of the second to N'th slots may be created by shifting the position data for the coordinates of the reference positions ((N-1) to (N-7)) taught for the Nth slot in the height direction, or by calculating the average of the position data of the reference positions of the first and Nth slots and shifting that average in the height direction.
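The calculation step (S5) amounts to stepping the taught coordinates at the slot pitch implied by the two taught heights. The following sketch implements the averaging variant, under the simplifying assumption that each reference position is reduced to a single (x, y, z) coordinate (the actual position data would also carry joint angles):

```python
def intermediate_reference_positions(p1, pN, n_slots):
    """Averaging variant of step S5: x and y are the mean of the two taught
    slots, z is stepped at the pitch implied by the taught heights.

    p1 and pN are the taught (x, y, z) coordinates of the bottom (first)
    and top (Nth) slots; returns slots 2 .. N-1, bottom to top.
    """
    (x1, y1, z1), (xN, yN, zN) = p1, pN
    pitch = (zN - z1) / (n_slots - 1)      # vertical distance between shelves
    x, y = (x1 + xN) / 2, (yN + y1) / 2    # average of the two taught slots
    return [(x, y, z1 + k * pitch) for k in range(1, n_slots - 1)]
```

Shifting only the first-slot data or only the Nth-slot data, as also described above, differs from this only in where x and y are taken from.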
This completes the teaching work. After the teaching step (S3) is completed, the cameras 30a and 30b are removed from the holder 11.

As described above, in the teaching method according to this embodiment, when the real space image matches the model image in the teaching step (S3), it is determined that the robot 1 has taken, with respect to the cassette 2, the same posture as when the model image was captured in the mastering step (S1), and this posture is taught as a reference position for the transfer operation.
Accordingly, in the teaching step (S3) the operator can carry out the teaching work by remote operation while watching the monitors 17a and 17b of the image processing device 16; there is no need to approach the robot 1 installed in the work area S to check its posture or the clearance between the hand 8 and the cassette 2, so the teaching work can be performed safely. Moreover, since the teaching work consists simply of aligning the real space image with the model image, it can be carried out with less labor and higher efficiency than before.

  The more reference positions that are taught, the more accurately and precisely the hand 8 can be moved. This teaching method is therefore well suited to a large robot 1 in which the travel distance of the hand 8 inside the cassette 2 is large, on the order of several meters, and which transports a large workpiece 3 requiring accurate and precise hand control.

  In a method that calculates the reference positions from a virtual cassette and robot created in a computer, the fork 10 may bend and deform unexpectedly under the weight of the workpiece 3 when the transfer operation is actually performed; as a result, the fork 10 may contact a workpiece 3 placed on the shelf below, risking problems such as damage to the workpiece 3. Each time a workpiece 3 of different weight is handled, the deformation of the fork 10 must also be recalculated on the computer. Above all, whether the virtually calculated reference positions are accurate cannot be determined until the actual transfer operation is performed.

  In contrast, in a method that teaches the reference positions while actually moving the robot 1 with respect to the cassette 2, as in this embodiment, performing the teaching work in the mastering step (S1) and the teaching step (S3) with the workpiece 3 placed on the hand 8 reliably compensates for the bending deformation of the fork 10 under the weight of the workpiece 3. That is, by carrying out the mastering step (S1) with the workpiece 3 placed on the hand 8 in advance, the reference position is determined, and the model image obtained, with the fork already bent under the weight of the workpiece 3. In the teaching step (S3), the real space image is likewise taken with the workpiece 3 on the hand 8, and by matching it to the earlier model image, the hand 8 can be reliably positioned in the same posture as the reference position determined in the mastering step (S1). Moreover, if several sets of model images are taken beforehand with workpieces 3 of different weights, the teaching step (S3) can be performed easily and a change of workpiece 3 handled quickly. The teaching method of this embodiment therefore compensates easily and reliably for the bending deformation of the hand 8 under the weight of the workpiece 3, so that highly accurate reference positions, consistent with the movement of the actual hand 8, can be taught. Note that teaching with the workpiece 3 placed on the hand 8 need only be performed at the fourth to seventh reference positions in the example of FIG. 5.
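Keeping several model image sets, each captured with a workpiece of a different weight pre-loaded on the hand, then reduces a workpiece change to a lookup of the closest match. The weights and set names below are purely illustrative assumptions:

```python
# Hypothetical model image sets keyed by the workpiece weight (kg) that was
# placed on the hand 8 when the mastering step (S1) was performed.
MODEL_SETS = {0.0: "model_empty", 5.0: "model_5kg", 12.0: "model_12kg"}


def pick_model_set(weight_kg, sets=MODEL_SETS):
    """Choose the model image set taken with the closest workpiece weight,
    so the fork deflection baked into the model matches the current load."""
    return sets[min(sets, key=lambda w: abs(w - weight_kg))]
```

With such a table, switching workpieces requires no new mastering step as long as a set with a sufficiently close weight already exists.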

  As a concrete countermeasure against the bending deformation of the fork 10 described above, the overall posture of the robot 1, including the hand 8, may be shifted to a slightly higher position. Alternatively, when the hand 8 can be inclined vertically with respect to the second arm 7, the hand 8 may be tilted so that the tip of the fork 10 points slightly above the horizontal; in this case, the tip of the fork 10 can be reliably prevented from contacting the workpiece 3 placed on the shelf below.
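The upward-tilt countermeasure can be quantified: if the fork tip droops by a known amount under load, the inclination needed to bring the tip back to level is the corresponding small angle. A sketch, with the dimensions in the usage example assumed for illustration:

```python
import math


def tilt_to_clear_deg(tip_droop_mm, fork_length_mm):
    """Upward tilt of the hand 8 (in degrees) that raises a fork tip
    drooping by tip_droop_mm, over a fork of length fork_length_mm,
    back to the horizontal."""
    return math.degrees(math.atan2(tip_droop_mm, fork_length_mm))
```

For example, a 5 mm droop at the tip of a 1 m fork would call for a tilt of roughly 0.29 degrees, small enough that the overlay alignment described above is essentially unaffected.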

  If the image regions of the hand 8 and the cassette 2 in the model image are identified and highlighted in the image processing step (S2), the real space image superimposed on the model image in the subsequent teaching step (S3) can be distinguished easily, and the positions of the hand 8 and the like in the two images can be captured accurately. The image alignment work can then be performed quickly, so the teaching work proceeds efficiently.

  As shown in FIG. 2, when a pair of left and right cameras 30a and 30b is mounted on the left and right ends of the holder 11 and the teaching step (S3) is performed using the pair of left and right images captured by these cameras, the real space image can be overlaid on each of the left and right model images and the clearance between the fork 10 and the shelf plates Ra and Rb checked from both sides, so the reference position can be taught more accurately. The reference position can thus be taught in a posture in which the fork 10 extends straight forward and backward while keeping the left and right margin gaps constant, eliminating the problem of the fork 10 or the workpiece 3 colliding with the shelf plates Ra and Rb.
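Checking the clearance from both cameras amounts to requiring that the left and right margin gaps stay equal as the fork moves. A minimal sketch; the asymmetry tolerance is an assumption, not a value from the patent:

```python
def fork_centered(left_gap_mm, right_gap_mm, max_asym_mm=0.5):
    """True when the margin gaps measured in the left and right camera
    images agree within max_asym_mm, i.e. the fork 10 runs straight
    between the shelf plates Ra and Rb rather than drifting to one side."""
    return abs(left_gap_mm - right_gap_mm) <= max_asym_mm
```

A single camera could not distinguish a centered fork from one that is merely the correct distance from one shelf plate, which is why the paired left/right check matters here.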

Further, in an embodiment in which the cameras 30a and 30b are detachably mounted on the fork 10, the fork 10 may bend under the weight of the cameras 30a and 30b; a slight difference may then arise between the posture of the hand 8 during the teaching work and during the actual transfer operation, causing problems such as the tip of the fork 10 contacting the inside of the cassette 2.
In contrast, when the cameras 30a and 30b are mounted on the holder 11 that supports the fork 10, as in this embodiment, there is no possibility of the fork 10 bending under the weight of the cameras 30a and 30b, which contributes to the reliability of the transfer operation.

  If the reference positions are taught only for the two shelves R at the bottom and the top, and the reference positions of the shelves located between them in the vertical direction (the second to N'th slots) are obtained by shifting the reference positions of the bottom (first slot) or top (Nth slot) in the height direction, the teaching work proceeds far faster than when the full series of teaching, consisting of the mastering step (S1) and the teaching step (S3), is performed for every shelf, and the working efficiency is improved remarkably.

  If the relative positional relationship between the hand 8 and the cassette 2 in the model image or real space image is parameterized as the coordinate position of the hand 8 on the image or as a distance dimension from the cassette 2, and the parameter value in the real space image photographed in the teaching step (S3) is compared with the parameter value of the model image, any deviation between the appropriate relative position of the hand 8 and the cassette 2 and the taught reference position can be checked, and the reference position can be taught with higher accuracy than when the teaching work relies on the monitor images alone. Since the teaching work can be performed while viewing the parameters, it is also simplified.

  In the above-described embodiment, the robot 1 is taught the transfer operation from the cassette 2 by teaching a total of seven reference positions, the first to the seventh; however, the number of reference positions may be larger or smaller. As noted above, the more reference positions that are taught, the more precisely the robot 1 can be controlled. The number of shelves R provided in the cassette 2 is likewise not limited.

  In the above embodiment, the teaching step (S3) is performed while the parameter values are checked, but the present invention is not limited to this; for example, the parameter values may instead be used to check the reference positions after the teaching step (S3) is completed.

Brief description of the drawings

FIG. 1 is a diagram showing the operating state of a transfer robot to which the teaching method according to the present invention is applied. FIG. 2 is a diagram showing the structure of that transfer robot. FIGS. 3 and 4 are diagrams for explaining the regions photographed by the cameras mounted on the transfer robot. FIG. 5 is a diagram for explaining the reference positions. FIG. 6 is a diagram for explaining the mounting positions of the cameras with respect to the hand. FIG. 7 is a flowchart for explaining the teaching method for a transfer robot according to the present invention.

Explanation of symbols

1 Transfer robot
2 Cassette
3 Workpiece
8 Hand
16 Image processing device
21 Cassette inlet/outlet
22 Cassette storage chamber
23 Cassette side wall
30a, 30b Cameras
R Shelf
Ra, Rb Shelf plates

Claims (5)

  1. A method for teaching a robot that transfers a workpiece out of a cassette the reference positions of the workpiece transfer operation, comprising:
    a mastering step of photographing, with a camera mounted on the hand of the robot while the workpiece transfer operation with respect to the cassette is simulated, a model image showing the appropriate relative positional relationship between the hand of the robot and the cassette at a reference position; and
    a teaching step of operating the robot with respect to the cassette, displaying on a monitor, superimposed on the model image photographed in the mastering step, the real space image photographed by the camera mounted on the hand of the robot, displacing the posture of the robot with reference to this monitor display so that the real space image is aligned with the model image, and teaching the robot, as the reference position, the posture position at which the two images coincide.
  2. The teaching method for a transfer robot according to claim 1, further comprising an image processing step of extracting the image regions of the hand and the cassette from the model image obtained in the mastering step and displaying these regions in an identifiable manner,
    wherein, in the teaching step, the real space image is displayed superimposed on the model image after image processing.
  3. The teaching method for a transfer robot according to claim 1 or 2, wherein the cassette has a rectangular box-shaped storage chamber with an entrance at its front, shelves on which rectangular plate-shaped workpieces are placed being arranged in multiple stages inside the storage chamber,
    each shelf consists of a pair of left and right shelf plates which face each other on the left and right wall surfaces of the storage chamber and run horizontally in the front-rear direction,
    a pair of left and right cameras whose fields of view include at least the shelf plates and the edge of the hand is mounted on the left and right ends of the hand,
    in the mastering step, model images showing the appropriate relative positional relationship of the hand with respect to the shelf plates are photographed by both cameras, and
    in the teaching step, the posture of the robot is displaced so that the shelf plates and the hand in the real space images overlap the shelf plates and the hand in the model images, and the robot is taught, as the reference position, the posture position at which the images overlap.
  4. The teaching method for a transfer robot according to claim 3, wherein the series of reference position teaching operations from the mastering step to the teaching step is performed only for the bottom shelf and the top shelf, and
    the reference positions of the shelves located between them in the vertical direction are calculated by shifting, in the height direction, the reference positions obtained in the teaching step for the bottom and top shelves, the calculated values being taught to the robot as the reference positions of those shelves.
  5. The teaching method for a transfer robot according to any one of claims 1 to 4, further comprising a step of parameterizing the relative positional relationship between the hand and the cassette in the model image or the real space image as the coordinate position of the hand on the image or as a distance dimension from the cassette,
    wherein comparing the parameter value in the real space image photographed in the teaching step with the parameter value in the model image makes it possible to check the deviation between the appropriate relative position of the hand and the cassette and the taught reference position.
JP2006266030A 2006-09-28 2006-09-28 Teaching method for transfer robot Active JP4869852B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2006266030A JP4869852B2 (en) 2006-09-28 2006-09-28 Teaching method for transfer robot


Publications (2)

Publication Number Publication Date
JP2008080466A JP2008080466A (en) 2008-04-10
JP4869852B2 true JP4869852B2 (en) 2012-02-08


Country Status (1)

Country Link
JP (1) JP4869852B2 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5431049B2 (en) * 2009-07-16 2014-03-05 株式会社荏原製作所 Control method for cassette of substrate transfer robot
JP5512350B2 (en) * 2010-03-30 2014-06-04 川崎重工業株式会社 Board transfer robot status monitoring device
JP6006103B2 (en) * 2012-12-07 2016-10-12 株式会社ダイヘン Robot teaching method, transfer method, and transfer system
KR101565336B1 (en) 2014-10-30 2015-11-04 주식회사 티에스시 Calibration Device of Robot for Transfering Wafer
JP6486679B2 (en) * 2014-12-25 2019-03-20 株式会社キーエンス Image processing apparatus, image processing system, image processing method, and computer program
KR101957096B1 (en) * 2018-03-05 2019-03-11 캐논 톡키 가부시키가이샤 Robot system, Manufacturing apparatus of device, Manufacturing method of device and Method for adjusting teaching positions

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2130218B (en) * 1982-11-18 1986-06-18 Ici Plc Electrodepositable coating compositions
JPH074781B2 (en) * 1986-07-23 1995-01-25 日立京葉エンジニアリング株式会社 Attitude reproduction method of the robot jig
JPH02198783A (en) * 1989-01-23 1990-08-07 Fanuc Ltd Correction method for positioning industrial robot
JPH04100118A (en) * 1990-06-28 1992-04-02 Yaskawa Electric Corp Position detection for robot with visual sense
JPH06285638A (en) * 1993-03-31 1994-10-11 Nippon Steel Corp Automatic welding control method
JPH11338532A (en) * 1998-05-22 1999-12-10 Hitachi Ltd Teaching device
JP2001202123A (en) * 2000-01-18 2001-07-27 Sony Corp Method for teaching carrier robot
JP2002172575A (en) * 2000-12-07 2002-06-18 Fanuc Ltd Teaching device
JP3694808B2 (en) * 2001-04-13 2005-09-14 株式会社安川電機 Wafer transfer robot teaching method and teaching plate
KR100870661B1 (en) * 2002-03-15 2008-11-26 엘지디스플레이 주식회사 Cassette for accepting substrate
JP3924495B2 (en) * 2002-04-24 2007-06-06 株式会社日立製作所 Remote control device
JP4257570B2 (en) * 2002-07-17 2009-04-22 株式会社安川電機 Transfer robot teaching device and transfer robot teaching method
JP2005005608A (en) * 2003-06-13 2005-01-06 Ebara Corp Teaching apparatus for substrate transfer robot and teaching method

Also Published As

Publication number Publication date
JP2008080466A (en) 2008-04-10


Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20090703

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20110117

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20110830

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20111011

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20111108


A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20111116

R150 Certificate of patent or registration of utility model

Ref document number: 4869852

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150


FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20141125

Year of fee payment: 3

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250
