WO2020022302A1 - Grasping device - Google Patents
- Publication number
- WO2020022302A1 (PCT/JP2019/028757)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- gripping
- work
- hand
- success rate
- image
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
Description
- This invention relates to a gripping device for picking up a work.
- In machining or assembly work, the work that is the object is often picked up automatically by a robot, an assembly device, or the like and set on a casing of a processing device or an assembly. At the time of pickup, the shape and posture of the work must be recognized to control the pickup arm. For example, a work information processing device is used for the work posture detection, shape recognition, and the like needed when a robot or assembly device takes out a work.
- Conventionally, image processing is used for work posture detection and shape recognition. In the work information processing device, a camera photographs the work, the posture of the work is detected by pattern matching or the like, and the hand is positioned into a grippable posture by controlling the pickup arm based on the detection result.
- JP-A-2015-213973, JP-A-2017-047505, JP-A-2016-221647, JP-A-10-332333, and JP-A-11-066321 disclose gripping devices for picking up a work.
- Regarding hand posture control, JP-A-2015-213973 (Patent Document 1) proposes a method in which interference between the hand, the hand positioning mechanism, and the container storing the works is checked before the gripping operation to determine whether a detected work can be picked. This checks for mechanical interference before the gripping operation and is a necessary function in work removal.
- The gripping device disclosed in JP-A-2017-047505 (Patent Document 2) does not simply judge gripping to be impossible when there is an obstacle in the path the hand passes between its approach point and grip point. Instead, it learns past grip success/failure information and the presence of obstacles in the hand's passing area during the gripping operation by machine learning (a support vector machine), determines the success or failure of a grip using the learning result before the gripping operation, and performs the gripping operation based on the determination result.
- Like JP-A-2015-213973 (Patent Document 1), this apparatus determines before the gripping operation whether gripping is possible in view of mechanical interference.
- JP-A-2016-221647 (Patent Document 3) proposes a method of selecting a gripping form from a plurality of pieces of gripping form information held in the apparatus, in consideration of the state of the surrounding works at the gripping position. The evaluation value used to select the gripping form is calculated from the number of contact points between the gripping target and the surrounding works and from information on how easily the target work slips, breaks, or tangles at those contact points, and the gripping form that maximizes this evaluation value is selected.
- JP-A-2017-047505 (Patent Document 2) is superior to JP-A-2015-213973 (Patent Document 1) in that the accuracy of success/failure determination can be improved by learning based on the success/failure information of past gripping operations, but the learning requires operation trials and therefore takes time.
- Also, as in JP-A-2016-221647 (Patent Document 3), it may be necessary to determine features in advance, so preparation for learning can take time.
- An object of the present invention is to solve these problems, namely to provide a gripping device capable of finishing, in a relatively short time, the learning for selecting an appropriate gripping position in the operation of taking out bulk-piled works.
- A grasping device includes a photographing device for observing the surface of a work, a three-dimensional sensor for measuring the coordinate data of points on the surface of the work, a hand for grasping the work, a positioning mechanism for positioning the hand in a posture capable of grasping, and a work information processing device that detects the optimal gripping position of the work using the photographing device and the three-dimensional sensor and controls the positioning mechanism and the hand.
- The work information processing device determines a gripping position of the work based on the coordinate data acquired from the three-dimensional sensor or the image data acquired from the photographing device, and operates the positioning mechanism and the hand so that the hand grips the work at that gripping position.
- The hand is then photographed by the photographing device, and whether the work has actually been grasped is determined from the acquired image data.
- The relationship between the determination result and the gripping position is accumulated to calculate a grip success rate for each gripping position, and the gripping position to be used in subsequent gripping operations is determined based on the grip success rate.
- Preferably, the work information processing device includes: a work detection unit that detects the position of a work to be grasped using the coordinate data acquired from the three-dimensional sensor or the image data acquired from the photographing device; a gripping position candidate detection unit that judges interference between the hand and objects around the work based on the work position detected by the work detection unit and detects one or more gripping position candidates; a gripping position selection unit that selects the optimal gripping position from the gripping position candidates extracted by the gripping position candidate detection unit; a command output unit that outputs positioning commands or grip/release commands to the positioning mechanism and the hand based on the optimal gripping position; and a grip success/failure determination unit that determines by image processing, based on the image data acquired from the photographing device, whether the work has actually been grasped.
- The gripping position candidate detection unit obtains a grip success rate for each gripping position candidate from a machine learning result that uses the data acquired from the three-dimensional sensor or the photographing device and the determination results of the grip success/failure determination unit.
- The gripping position selection unit selects the candidate with the maximum grip success rate as the optimal gripping position.
- More preferably, the gripping position candidate detection unit obtains the grip success rate using, as machine learning training data, the coordinate data acquired from the three-dimensional sensor or the image data acquired from the photographing device, the gripping position at which each gripping operation was performed, and the determination result of the grip success/failure determination unit.
- More preferably, with N a predetermined natural number, the number of gripping operations for which the gripping position candidate detection unit does not use the learning result to calculate the grip success rate is N.
- Up to the Nth grip, the gripping position selection unit selects, among the gripping position candidates, the point closest to the center of gravity of the work in the image; after N grips are exceeded, it selects as the optimal gripping position the candidate whose grip success rate, obtained using the machine learning result, is maximum.
- More preferably, the gripping position candidate detection unit sequentially updates the machine learning result during operation of the gripping device, using the coordinate data acquired from the three-dimensional sensor or the image data acquired from the photographing device and the determination results of the grip success/failure determination unit.
- More preferably, the gripping position candidate detection unit resets at least a part of the machine learning result every fixed number of gripping operations.
- Preferably, the apparatus further includes a vibrating table on which a container for accommodating the works is placed. In the next gripping operation, if the grip success rate corresponding to the gripping position determined from an image of the works in the container, captured by the photographing device before gripping with the hand, is lower than a judgment value, the work information processing device changes the positions of the works using the vibrating table.
- Preferably, the photographing device includes a first camera that photographs the placement location where works lie before being gripped by the hand and a second camera that photographs the arrangement location where a work is to be placed after being gripped by the hand.
- The work information processing device determines the arrangement position of the work based on the image data acquired from the second camera, determines whether the arrangement position matches a reference position, and calculates a placement success rate from the determination results.
- In the next gripping operation, the work information processing device photographs with the first camera the works lying at the placement location before gripping with the hand, and decides whether to adopt a gripping position based on the grip success rate and the placement success rate corresponding to the gripping position determined from the captured image.
- FIG. 1 is a diagram illustrating the configuration of a gripping device according to the first embodiment.
- FIG. 2 is a front view showing the shape of the hand.
- FIG. 3 is a side view showing the shape of the hand.
- FIG. 4 is a diagram illustrating the internal configuration of a work information processing device 6.
- FIG. 5 is a diagram for explaining the state of pattern matching.
- FIG. 6 is a diagram for explaining the information detected at the time of pattern matching.
- FIG. 7 is a diagram of the work W viewed from the + direction of the Z axis in FIG. 6.
- FIG. 8 is a diagram for describing the processing executed by a gripping position candidate detection unit 63.
- FIG. 9 is a first example of designation of a position at which a work W can be gripped.
- FIG. 10 is a second example of designation of a position at which the work W can be gripped.
- FIG. 11 is a diagram showing an example arrangement of a work W having a plurality of grippable locations and the surrounding works.
- FIG. 12 is a diagram showing an image detected at the time of a successful grip.
- FIG. 13 is a diagram showing an image detected at the time of a grip failure.
- FIG. 14 is a flowchart illustrating the processing performed by the work information processing device.
- FIG. 15 is a diagram illustrating the configuration of a gripping device according to a second embodiment.
- FIG. 16 is a flowchart describing the processing executed by the work information processing device according to the second embodiment.
- FIG. 17 is a diagram illustrating the configuration of a gripping device according to a third embodiment.
- FIG. 18 is a flowchart describing the processing executed by the work information processing device according to the third embodiment.
- FIG. 19 is a diagram showing an image detected at the time of a successful installation.
- FIG. 20 is a diagram showing an image detected at the time of a failed installation.
- In the first embodiment, a case of gripping cylindrical works in a bulk-piled state is described. The gripping device of this embodiment, when detecting the gripping position of works in a piled state, selects the optimal gripping position for inserting the hand while avoiding interference with the surrounding works.
- FIG. 1 is a diagram illustrating a configuration of the gripping device according to the first embodiment.
- A gripping device 1 includes a camera 2 for observing the surface of a work W, a three-dimensional sensor 3 for measuring the XYZ coordinate data of each point on the surface of the work W, a hand 4 for gripping the work W, a positioning mechanism 5 for positioning the hand 4 in a posture capable of gripping the work W, and a work information processing device 6 that detects the optimal gripping position using the image data R, G, B from the camera 2 and the XYZ coordinate data D from the three-dimensional sensor 3 and controls the positioning mechanism 5 and the hand 4.
- The work information processing device 6 determines the gripping position of a work based on the coordinate data acquired from the three-dimensional sensor 3 or the image data acquired from the camera 2, and operates the positioning mechanism 5 and the hand 4 so that the hand 4 grips the work at that position.
- The hand 4 is then photographed by the camera 2, and whether the work has actually been grasped is determined from the acquired image data.
- The relationship between the determination result and the gripping position is accumulated to calculate the grip success rate at each gripping position, and the gripping position to be employed in subsequent gripping operations is determined based on the grip success rate.
- In this embodiment, an articulated robot is used as the positioning mechanism 5. Alternatively, a horizontal articulated robot whose arm moves in the horizontal direction, an orthogonal robot configured from three orthogonal slide axes, or the like can be selected as the positioning mechanism 5.
- FIG. 2 is a front view showing the shape of the hand.
- FIG. 3 is a side view showing the shape of the hand.
- The hand 4 grips a work by sandwiching it between its two fingers 41 and 42.
- Two cameras (not shown) are mounted inside the three-dimensional sensor 3 and measure the three-dimensional coordinates of the object surface by triangulation. If the surface of the work W has no pattern to serve as a measurement mark, points in the two cameras' images cannot be associated with each other, so slit-shaped or dot-shaped light is emitted from a light source toward the work W.
- This light source may be mounted inside the three-dimensional sensor 3 or may be arranged above the camera 2.
- The three-dimensional sensor 3 need not be a stereo-type sensor; any sensor that measures three-dimensional coordinate data, for example one based on the light-section method or the phase-shift method, may be used.
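- For orientation only, the depth computation underlying such a stereo sensor can be sketched as follows. This is a minimal illustration of triangulation from one matched point pair, not taken from the patent; the focal length, baseline, and pixel coordinates in the example are hypothetical.

```python
# Minimal stereo triangulation sketch (illustrative, not from the patent).
# Assumes rectified cameras: focal length f in pixels, baseline B in meters,
# principal point (cx, cy), and a point matched at x_left / x_right.

def triangulate(x_left, x_right, y, f, B, cx, cy):
    """Return (X, Y, Z) in meters in the left-camera frame."""
    d = x_left - x_right       # disparity in pixels (must be > 0)
    Z = f * B / d              # depth from similar triangles
    X = (x_left - cx) * Z / f  # back-projection through the pinhole model
    Y = (y - cy) * Z / f
    return X, Y, Z

# Hypothetical numbers: f = 1400 px, B = 0.1 m, principal point (640, 480).
print(triangulate(700.0, 680.0, 500.0, f=1400.0, B=0.1, cx=640.0, cy=480.0))
```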
- FIG. 4 is a diagram illustrating an internal configuration of the work information processing apparatus 6.
- The work information processing device 6 includes an image input unit 61, a work detection unit 62, a gripping position candidate detection unit 63, a gripping position selection unit 64, a grip success/failure determination unit 65, a data storage unit 67, and a command output unit 66.
- The image input unit 61 receives the XYZ coordinate data from the three-dimensional sensor 3 and the image data from the camera 2.
- The work detection unit 62 detects the position of the work to be grasped using the XYZ coordinate data acquired from the three-dimensional sensor 3 or the image data acquired from the camera 2.
- The gripping position candidate detection unit 63 judges interference between the hand 4 and objects around the work based on the work position detected by the work detection unit 62, and detects one or more gripping position candidates.
- The gripping position selection unit 64 selects, from the gripping position candidates extracted by the gripping position candidate detection unit 63, the optimal gripping position at which the predicted success rate is maximum.
- The grip success/failure determination unit 65 determines by image processing, based on the image data acquired from the camera 2, whether the work has actually been grasped.
- The data storage unit 67 stores the XYZ coordinate data from the three-dimensional sensor 3, the image data acquired from the camera 2, the coordinate values of the gripping position candidates, the success/failure determination results of gripping operations, and the like.
- The command output unit 66 outputs positioning commands or grip/release commands to the positioning mechanism 5 and the hand 4 based on the optimal gripping position.
- The gripping position candidate detection unit 63 obtains a grip success rate for each gripping position candidate from the machine learning result that uses the data acquired from the three-dimensional sensor 3 or the camera 2 and the determination results of the grip success/failure determination unit 65, and the candidate with the highest grip success rate is selected as the optimal gripping position.
- The work detection unit 62 detects a gripping target using the XYZ coordinate data D acquired from the three-dimensional sensor 3. In this embodiment, pattern matching is used to detect the gripping target.
- FIG. 5 is a diagram for explaining the state of pattern matching.
- An image of the work W is stored in advance in the data storage unit 67 as a template T.
- When detecting a gripping target, the work detection unit 62 reads out the template T, superimposes it on the image data from the camera 2 or the XYZ coordinate data from the three-dimensional sensor 3, and performs pattern matching to search for the work whose posture matches best.
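- Template matching of this kind is available in standard libraries. The following is a minimal sketch using OpenCV's normalized cross-correlation; the file names are hypothetical, and unlike the patent's matching it recovers only a translation (recovering the rotation angles as well would require, for example, matching over a set of rotated templates).

```python
import cv2

# Illustrative template-matching sketch; file names are hypothetical.
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)        # camera image
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)  # stored template T

# Slide the template over the scene and score every position.
scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_top_left = cv2.minMaxLoc(scores)

h, w = template.shape
center = (best_top_left[0] + w // 2, best_top_left[1] + h // 2)
print(f"best match centered at {center}, score {best_score:.2f}")
```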
- FIG. 6 is a diagram for explaining information detected at the time of pattern matching.
- In pattern matching, the center coordinates (ox, oy, oz) of the work W and the rotation angles (α, β, γ) around each axis are detected.
- FIG. 7 is a diagram of the work W viewed from the + direction of the Z axis in FIG. 6.
- FIG. 7 shows a positional relationship among a workpiece W to be gripped, four surrounding workpieces, and two fingers 41 and 42 of the hand 4.
- In FIG. 7, the hand 4 is positioned so that the midpoint between the coordinates of the two fingers 41 and 42 of the hand 4 coincides with the center of gravity O of the work W.
- In this example, the work W and the two fingers 41 and 42 of the hand 4 do not interfere.
- The gripping position candidate detection unit 63 judges a location where the work W and the two fingers 41 and 42 of the hand 4 do not interfere, as shown in FIG. 7, to be grippable.
- FIG. 8 is a diagram for explaining the processing executed by the gripping position candidate detection unit 63.
- The gripping position candidate detection unit 63 predicts the gripping position candidates shown in FIG. 7 and the success rate of the gripping operation using the neural network in FIG. 8.
- The neural network receives the XYZ coordinate data D acquired from the three-dimensional sensor 3 and the images R, G, and B acquired from the camera 2 as inputs, and predicts the XYZ coordinate values of gripping position candidates and their grip success rates.
- The gripping position candidate detection unit 63 also performs the learning process of the neural network. The error backpropagation method is used for learning.
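- The patent does not disclose the network architecture. As one hedged illustration, a small convolutional network could map the four input channels (R, G, B plus the coordinate/depth map D) to rectangle parameters and a success-rate score; every layer size below is an assumption.

```python
import torch
import torch.nn as nn

class GraspNet(nn.Module):
    """Illustrative sketch only; the patent does not specify an architecture.
    Input: 4-channel image (R, G, B + depth map D), e.g. 4 x 128 x 128.
    Output: rectangle parameters (x, y, z, width, height) and a 0..100 rate."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rect = nn.Linear(64, 5)                       # x, y, z, w, h
        self.success = nn.Sequential(nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x):
        h = self.features(x)
        return self.rect(h), 100.0 * self.success(h)       # rate in 0..100

net = GraspNet()
rgbd = torch.randn(1, 4, 128, 128)  # dummy R, G, B, D input
rect, rate = net(rgbd)              # trainable with backpropagation
```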
- FIG. 9 is a first example of designation of a grippable position of the work W.
- FIG. 10 is a second example of designation of a grippable position of the work W.
- As teacher data for grippable positions, a human designates rectangles as shown in FIGS. 9 and 10.
- The center coordinates X and Y and the depth coordinate Z of each rectangle in FIGS. 9 and 10 are obtained from the coordinate data D acquired from the three-dimensional sensor 3. The maximum value of 100 is given as the initial value of the success rate of a designated gripping position; the success rate takes values in the range 0 to 100.
- In this way, works can be photographed in advance, a human can judge and designate grippable positions in the photographed images, and learning can be performed beforehand using the photographed images and the human-designated positions as teacher data.
- By using this learning result, the number of trials of the actual operation can be reduced.
- In addition, it is not necessary to determine features in advance as in JP-A-2016-221647 (Patent Document 3), so the preparation time for learning can also be reduced.
- FIG. 11 is a diagram showing an example arrangement of a work W having a plurality of grippable locations and the surrounding works. For the situation shown in FIG. 11, labels indicating grip priority are assigned to the gripping positions; here, for example, A has the highest priority, and three levels of priority are provided in the order A, B, C. The priority is judged and assigned by the person who designates the rectangles.
- In this case, the center coordinates (X, Y), the depth coordinate Z, the vertical and horizontal sizes of the rectangle, the initial value of the success rate, and the label are assigned to each rectangle, as in the record sketched below. The neural network is trained using this information, the images R, G, and B of the work W, and the XYZ coordinate data D acquired from the three-dimensional sensor 3 as teacher data.
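- Collecting those fields, one teacher-data record might be represented as follows. The field names are hypothetical, but each corresponds to an item named in the text (rectangle center, depth, sizes, initial success rate, and priority label).

```python
from dataclasses import dataclass

@dataclass
class TeacherRecord:
    """One human-designated grippable rectangle (field names are hypothetical)."""
    x: float                      # rectangle center X on the image
    y: float                      # rectangle center Y on the image
    z: float                      # depth at the center, from sensor data D
    width: float                  # horizontal size of the rectangle
    height: float                 # vertical size of the rectangle
    success_rate: float = 100.0   # initial value: maximum of the 0..100 range
    label: str = "A"              # priority label: "A" (highest), "B", or "C"
```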
- The gripping position candidate detection unit 63 detects gripping position candidates using this learning result as the initial value.
- The gripping position selection unit 64 selects, from the gripping position candidates detected by the gripping position candidate detection unit 63, the candidate with the highest priority and the highest predicted success rate of the gripping operation.
- The success or failure of the gripping operation is determined by the grip success/failure determination unit 65. After the actual gripping operation, the hand 4 is photographed by the camera 2 and the work in the hand 4 is detected by image processing. If a work is detected, the grip is determined to be successful; if no work is detected, it is determined to have failed.
- FIG. 12 is a diagram showing an image detected at the time of successful gripping.
- FIG. 13 is a diagram showing an image detected at the time of grip failure.
- As shown in FIGS. 12 and 13, the grip success/failure determination unit 65 compares an image of the hand 4 in the non-gripping state, photographed in advance and stored in the data storage unit 67, with the image taken after the gripping operation. If the two images match, it determines that no work has been detected; if they do not match, it determines that a work has been detected.
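- A minimal version of this comparison can be sketched with an absolute image difference, as below. The threshold values are assumptions, and a real system would also need image alignment and lighting compensation.

```python
import cv2
import numpy as np

def grasp_succeeded(after_img, empty_hand_img,
                    diff_threshold=25, min_changed_pixels=500):
    """Illustrative sketch: the grip is judged successful when the post-grip
    image differs enough from the stored empty-hand reference image.
    Both thresholds are hypothetical tuning values."""
    diff = cv2.absdiff(after_img, empty_hand_img)
    changed = np.count_nonzero(diff > diff_threshold)
    return changed >= min_changed_pixels  # images match -> no work -> failure
```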
- The gripping position candidate detection unit 63 also performs learning during the trials of gripping operations and updates the learning result.
- As machine learning training data, the gripping position candidate detection unit 63 uses the coordinate data acquired from the three-dimensional sensor 3 or the image data acquired from the camera 2, the gripping position at which each gripping operation was executed, and the determination result of the grip success/failure determination unit 65.
- At that time, the success rate S of the gripping operation predicted by the neural network is substituted into the following equation to obtain the updated success rate S′:
- S′ = αS + β
- Here, (α, β) are called the update coefficients.
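- The patent gives no numeric coefficients. As a purely illustrative example, one coefficient pair might be used for a successful grip and another for a failure, so that S′ drifts toward the observed outcome; the values below are assumptions.

```python
def update_success_rate(S, grip_succeeded):
    """S' = alpha * S + beta, clamped to the 0..100 range.
    The coefficient values are hypothetical, not from the patent."""
    alpha, beta = (0.9, 10.0) if grip_succeeded else (0.9, 0.0)
    return max(0.0, min(100.0, alpha * S + beta))

print(update_success_rate(80.0, True))   # 82.0: a success nudges S upward
print(update_success_rate(80.0, False))  # 72.0: a failure decays S
```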
- Learning is then performed using the success rate S′ updated as described above, the center coordinates (X, Y), the depth coordinate Z, the vertical and horizontal sizes and labels of the rectangles predicted by the neural network, the images R, G, and B that the gripping position candidate detection unit 63 used for detection, and the XYZ coordinate data D from the three-dimensional sensor 3.
- In this way, the gripping position candidate detection unit 63 sequentially updates the machine learning result during operation of the gripping device 1, using the coordinate data acquired from the three-dimensional sensor 3 or the image data acquired from the camera 2 and the determination results of the grip success/failure determination unit 65.
- As learning proceeds, gripping positions with a low success rate are excluded and only gripping positions with a high success rate are selected. Grip failures are thereby reduced, and the removal operation becomes more efficient.
- In this embodiment, learning is performed sequentially while the gripping device is operating, but learning need not always be sequential; it may instead be triggered by changes in the surrounding environment, changes in the shape of the hand 4, changes in the machining quality of the works, and the like.
- The learning result may also be reset every predetermined number M of gripping operations.
- In that case, the gripping position candidate detection unit 63 resets the machine learning result every fixed number of gripping operations. After a reset, the same learning is performed again each time a gripping operation is executed, and the learning result is reset again when the number of learning operations reaches M. M need not be a fixed value and may be changed at each reset according to changes in the surrounding environment, the hand 4, the works, and the like.
- A human may also periodically check the recognition results and create teacher data.
- In that case, the human designates rectangles as shown in FIGS. 9 and 10, for example. The updated success rate S′ may also be designated directly by a human. This can further reduce the learning time.
- FIG. 14 is a flowchart illustrating a process performed by the work information processing apparatus.
- The process shown in FIG. 14 is executed after a human has given teacher data for gripping positions and the neural network has been trained to some extent, as described with reference to FIGS. 9 to 11.
- First, the work information processing device 6 acquires images from the camera 2 and the three-dimensional sensor 3 (S1). Subsequently, the work information processing device 6 detects gripping targets from the acquired images (S2), detects gripping position candidates, and predicts the grip success rate of each candidate (S3).
- The work information processing device 6 then selects the gripping position candidate with the highest priority and the highest grip success rate (S4) and executes the gripping operation using the positioning mechanism 5 and the hand 4 (S5).
- N, the number of gripping operations for which the learning result is not used to calculate the grip success rate in step S3, is set in advance.
- Up to the Nth gripping operation, the point closest to the center of gravity is selected; after N operations are exceeded, the grip success rate may be calculated using the machine learning result in step S3, and the candidate with the maximum grip success rate may be selected as the optimal gripping position in step S4.
- After executing the gripping operation, the work information processing device 6 photographs the hand 4 with the camera 2 (S6) and checks whether the gripping operation succeeded (S7). The work information processing device 6 then saves the images, the gripping position coordinates, and the grip success/failure result (S8), and executes the learning process of the neural network (S9).
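- Steps S1 to S9, together with the N-trial rule above, can be summarized in pseudocode form. Every helper function here is a hypothetical placeholder for the corresponding unit of the work information processing device, not an API from the patent.

```python
# Pseudocode-level sketch of the S1-S9 loop; all helper names are hypothetical.
def picking_cycle(trial_count, N):
    rgb, xyz = acquire_images()                   # S1: camera 2 + 3D sensor 3
    works = detect_targets(rgb, xyz)              # S2: pattern matching
    candidates = detect_grasp_candidates(works)   # S3: interference check and
                                                  #     predicted success rates
    if trial_count <= N:                          # early trials: centroid rule
        best = min(candidates, key=lambda c: c.distance_to_centroid)
    else:                                         # S4: priority, then rate
        # priority assumed encoded numerically so that larger = higher
        best = max(candidates, key=lambda c: (c.priority, c.success_rate))
    execute_grasp(best)                           # S5: positioning mech. + hand
    hand_img = photograph_hand()                  # S6
    ok = check_grip_success(hand_img)             # S7
    save_result(rgb, xyz, best, ok)               # S8: data storage unit 67
    train_network()                               # S9: update learning result
```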
- In this embodiment, the learning is performed inside the work information processing device 6, but the present invention is not limited to this; the learning may be performed by a dedicated learning device separate from the work information processing device 6.
- In that case, the work information processing device 6 and the learning device are connected by serial communication or I/O, and the coordinate values of the gripping position candidates used for learning, the XYZ coordinate data D from the three-dimensional sensor, the camera image data R, G, B, and the success/failure determination results of the gripping operations are transmitted from the work information processing device 6 to the learning device.
- The learning device performs learning based on the received data and transmits the learning result to the work information processing device 6.
- The work information processing device 6 then calculates grip success rates using the new learning result received from the learning device.
- The learning method is not limited to a neural network trained by the error backpropagation method; a supervised learning method such as a support vector machine or an autoencoder, an unsupervised learning method such as the k-means method or principal component analysis, or a machine learning method combining these may be used.
- In the gripping device of this embodiment, because the work images actually captured by the three-dimensional sensor and the camera and the actual results of gripping operations are used for learning, variations in the dimensions and surface texture of the works, the current state of the hand, and the like can be reflected in the learning, and a decrease in the grip success rate can be suppressed. Since grip failures decrease, the takt time of the operation can also be reduced.
- In the first embodiment, the work with the highest grip success rate is selected from the works in their current arrangement. However, depending on the positional relationship with adjacent works, there may be no position at which the hand 4 can be inserted. The second embodiment assists the gripping work in such a case.
- FIG. 15 is a diagram illustrating a configuration of the gripping device according to the second embodiment.
- A gripping device 1A further includes, in addition to the configuration of the gripping device 1 in FIG. 1, a vibrating table 8 on which a container (work tray 7) for accommodating works is placed.
- The vibrating table 8 is provided with a vibrating body.
- The gripping device 1A includes a work information processing device 6A instead of the work information processing device 6.
- The camera 2, the three-dimensional sensor 3, the hand 4, and the positioning mechanism 5 are the same as in FIG. 1.
- The work information processing device 6A attempts to improve the grip success rate by vibrating the vibrating table 8 when the grip success rate of the gripping position candidates is lower than a threshold.
- Specifically, when the grip success rate corresponding to the gripping position determined from an image of the works in the container, captured by the camera before gripping with the hand, is lower than the judgment value, the work information processing device 6A changes the positions of the works using the vibrating table 8.
- FIG. 16 is a flowchart illustrating a process executed by the work information processing apparatus according to the second embodiment.
- The processing in the flowchart of FIG. 16 differs from the processing described in FIG. 14 in that steps S51 and S52 are performed after step S4.
- The work information processing device 6A acquires images (S1), detects gripping targets (S2), predicts the grip success rates of the gripping position candidates (S3), and selects the gripping position candidate with the highest priority and the maximum grip success rate (S4), sequentially as in the case of FIG. 14.
- In step S51, the work information processing device 6A determines whether the predicted success rate S of the gripping position candidate is higher than a certain threshold T.
- If it is not, the vibrating table 8 is vibrated in the direction of the arrow for a certain time (S52), and the processing of steps S1 to S4 is executed again in an attempt to detect a gripping position candidate whose success rate S exceeds the threshold T.
- If no gripping position candidate with an improved success rate is detected even after repeating step S52 several times, an operation such as shifting the works with the hand 4 may be performed, as in the sketch below.
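- The retry logic of steps S51 and S52 can be sketched as follows; the threshold T, the retry limit, and the helper names are all assumptions.

```python
# Hedged sketch of steps S51/S52; helper names and limits are hypothetical.
def grasp_with_vibration_assist(T=60.0, max_retries=3):
    for _ in range(max_retries + 1):
        best = detect_best_candidate()  # steps S1 to S4
        if best.success_rate > T:       # S51: predicted rate above threshold?
            return execute_grasp(best)
        vibrate_table(duration_s=1.0)   # S52: reshuffle the works in the tray
    return shift_works_with_hand()      # fallback mentioned in the text
```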
- The gripping device according to the second embodiment has the same effects as the gripping device according to the first embodiment, and can further improve the grip success rate when it is low due to poor placement of the works.
- FIG. 17 is a diagram illustrating the configuration of the gripping device according to the third embodiment.
- A gripping device 1B includes, in the configuration of the gripping device 1 in FIG. 1, a camera 2A and a camera 2B as photographing devices.
- The camera 2A is similar to the camera 2 in FIG. 1.
- The camera 2B photographs the position of the work set on the work table 10 for the next process.
- The gripping device 1B includes a work information processing device 6B instead of the work information processing device 6.
- The three-dimensional sensor 3, the hand 4, and the positioning mechanism 5 are the same as those in FIG. 1.
- The photographing device includes the first camera 2A, which photographs the placement location where works lie before being gripped by the hand, and the second camera 2B, which photographs the arrangement location where a work is placed after being gripped by the hand.
- The work information processing device 6B determines the arrangement position of the work based on the image data acquired from the second camera 2B, determines whether the arrangement position matches the reference position, and calculates a placement success rate from the determination results.
- In the next gripping operation, the work information processing device 6B decides whether to adopt a gripping position based on the grip success rate and the placement success rate corresponding to the gripping position determined from the image, taken by the first camera 2A, of the works lying at the placement location before gripping with the hand 4.
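- The patent does not state how the two rates are combined. One hedged possibility, sketched below, is to rank or gate candidates by the product of the two rates; both the formula and the threshold are assumptions.

```python
# Illustrative combination of grip and placement success rates; the product
# score and the threshold are assumptions, not specified by the patent.
def adopt_grip_position(grip_rate, placement_rate, threshold=50.0):
    score = (grip_rate / 100.0) * (placement_rate / 100.0) * 100.0
    return score >= threshold

print(adopt_grip_position(90.0, 80.0))  # score 72.0 >= 50.0 -> adopt
print(adopt_grip_position(60.0, 40.0))  # score 24.0 < 50.0 -> reject
```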
- FIG. 18 is a flowchart illustrating a process executed by the work information processing apparatus according to the third embodiment.
- The processing in the flowchart of FIG. 18 differs from the processing described in FIG. 14 in that steps S101 to S103 are performed after step S7.
- The work information processing device 6B sequentially performs the processing of steps S1 to S7 in the same manner as in FIG. 14. Then, in step S101, the work information processing device 6B executes the setting operation of the gripped work, photographs the setting position with the camera 2B in step S102, and checks whether the setting operation succeeded in step S103.
- In the learning of step S9, the installation success rate is considered in addition to the grip success rate.
- FIG. 19 is a diagram showing an image detected at the time of successful installation.
- As shown in FIG. 19, when an image of the work arranged at the target position is compared with the image taken after the hand 4 actually gripped the work and set it on the work table 10 and there is no mismatched portion, the installation is determined to be successful.
- FIG. 20 is a diagram showing an image detected when the installation has failed.
- As shown in FIG. 20, when an image of the work placed at the target position is compared with the image taken after the hand 4 actually gripped the work and set it on the work table 10 and a mismatched portion exists, the installation is determined to have failed.
- In other words, the work information processing device 6B compares an image of the work placed at the target position, photographed in advance and stored in the data storage unit 67, with the image of the work after the installation operation; if the two images match, the installation is determined to be successful, and if they do not match, it is determined to have failed.
- The gripping device according to the third embodiment has the same effects as the gripping device according to the first embodiment, and can further improve the installation success rate when it is low due to a poor gripping position on the work.
Abstract
A grasping device (1) is provided with a workpiece information processing device (6) which employs an image capturing device (2) and a three dimensional sensor (3) to detect an optimal grasping position of a workpiece, and which controls a positioning mechanism (5) and a hand (4). The workpiece information processing device (6) determines a grasping position of the workpiece on the basis of coordinate data acquired from the three dimensional sensor (3) or image data acquired from the camera (2), moves the positioning mechanism (5) and the hand (4) on the basis of the grasping position in such a way that the hand (4) grasps the workpiece, captures an image of the hand (4) by means of the image capturing device (2), determines whether the workpiece has actually been grasped, on the basis of the acquired image data, accumulates a relationship between the determination result and the grasping position to calculate a grasping success ratio for each grasping position, and determines the grasping position to be adopted during subsequent grasping operations on the basis of the grasping success ratio.
Description
この発明は、ワークをピックアップする把持装置に関する。
This invention relates to a gripping device for picking up a work.
機械加工または組立作業において、対象物であるワークは、ロボットまたは組立装置等によって自動的にピックアップされ加工装置または組立物の筐体などにセットされることが多い。ピックアップ時には、ワークの形状や姿勢を認識してピックアップアームを制御する必要がある。たとえば、ロボットや組立装置等によるワークの取り出し作業で必要なワークの姿勢検出や形状認識などにワーク情報処理装置が利用される。
In machining or assembling work, a work as an object is often automatically picked up by a robot or an assembling device or the like and set on a casing of the processing device or the assembly. At the time of pickup, it is necessary to recognize the shape and posture of the work and control the pickup arm. For example, a work information processing apparatus is used for detecting a posture of a work, recognizing a shape, and the like, which are necessary for a work to be taken out by a robot or an assembly device.
従来、ワークの姿勢検出や形状認識などに画像処理が用いられている。ワーク情報処理装置では、カメラでワークを撮影し、パターンマッチング等でワークの姿勢を検出し、検出結果に基づいてピックアップアームを制御して把持可能な姿勢にハンドを位置決めする。
画像 Conventionally, image processing is used for work posture detection and shape recognition. In the work information processing apparatus, a work is photographed by a camera, the posture of the work is detected by pattern matching or the like, and the hand is positioned to a grippable posture by controlling a pickup arm based on the detection result.
特開2015-213973号公報、特開2017-047505号公報、特開2016-221647号公報、特開平10-332333号公報および特開平11-066321公報には、ワークをピックアップする把持装置が開示されている。
JP-A-2015-213975, JP-A-2017-047505, JP-A-2016-221647, JP-A-10-332333 and JP-A-11-066321 disclose gripping devices for picking up a work. ing.
ハンドの姿勢制御に関しては、特開2015-213973号公報(特許文献1)において、ハンドおよびハンドの位置決め機構とワークが格納されている容器との干渉を把持動作前にチェックし、検出されたワークがピッキング可能か否かを判定する方法が提案されている。機械的な干渉を把持動作前にチェックするものであり、ワークの取り出し作業において必要な機能である。
Regarding the posture control of the hand, in Japanese Patent Application Laid-Open No. 2015-213973 (Patent Document 1), interference between the hand and the positioning mechanism of the hand and the container storing the work is checked before the gripping operation, and the detected work is detected. There has been proposed a method of determining whether or not a device can be picked. This is to check for mechanical interference before the gripping operation, and is a necessary function in work removal.
また、特開2017-047505号公報(特許文献2)に開示された把持装置は、ハンドのアプローチ点から把持点まで間にハンドが通過する過程で障害物があった場合でも把持不可を判定せず、過去の把持成否情報と把持動作時のハンドの通過領域内の障害物の存在状態を機械学習(サポートベクタマシン)で学習し、把持動作前に学習結果を用いて把持の成否を判定し、判定結果に基づいて把持動作を行なう。この装置も特開2015-213973号公報(特許文献1)と同様、機械的な干渉による把持動作の可否を把持動作前に判定するものである。
Further, the gripping device disclosed in Japanese Patent Application Laid-Open No. 2017-047505 (Patent Document 2) determines that it is impossible to grip even if there is an obstacle in the process of the hand passing between the approach point and the grip point of the hand. Instead, learn the past grasping success / failure information and the existence state of obstacles in the passing area of the hand at the time of the grasping operation by machine learning (support vector machine), and determine the success or failure of the grasping using the learning result before the grasping operation. The gripping operation is performed based on the determination result. This apparatus also determines whether or not a gripping operation due to mechanical interference is possible before the gripping operation, as in Japanese Patent Application Laid-Open No. 2015-213973 (Patent Document 1).
特開2016-221647号公報(特許文献3)は、装置内に保持されている複数の把持形態情報の中から、把持位置における周囲ワークの状態を考慮して把持形態を選択する方法を提案している。把持形態を選択する評価値として、把持対象と周囲ワークとの接触点の数や、接触点に対象ワークの滑りやすさ、壊れやすさ、絡まりやすさの情報に基づいて評価値を算出し、評価値が最大になる把持形態を選択する方法を提案している。
Japanese Patent Laying-Open No. 2016-221647 (Patent Document 3) proposes a method of selecting a gripping form from a plurality of pieces of gripping form information held in the apparatus in consideration of the state of a surrounding work at a gripping position. ing. As the evaluation value for selecting the gripping form, the evaluation value is calculated based on the number of contact points between the gripping target and the surrounding work, and information on the ease of slipping, breaking, and tangling of the target work at the contact points, A method of selecting a gripping form that maximizes an evaluation value is proposed.
特開2017-047505号公報(特許文献2)は、過去の把持動作の成否情報に基づいた学習で成否判定の精度を向上できる点で特開2015-213973号公報(特許文献1)よりも優れているが、学習には動作の試行が必要であり、学習に時間がかかるという課題がある。また、特開2016-221647号公報(特許文献3)のように、事前に特徴を定める必要があり、学習の準備に時間がかかる場合もある。
Japanese Patent Application Laid-Open No. 2017-047505 (Patent Document 2) is superior to Japanese Patent Application Laid-Open No. 2015-213973 (Patent Document 1) in that the accuracy of success / failure determination can be improved by learning based on the success / failure information of past gripping operations. However, there is a problem in that learning requires an operation trial, and learning takes time. Also, as in Japanese Patent Application Laid-Open No. 2016-221647 (Patent Document 3), it is necessary to determine features in advance, and preparation for learning may take time.
本発明は、このような課題を解決するためのものであって、その目的は、バラ積みワークの取り出し作業において、適切な把持位置を選択するための学習を比較的短時間で終了することが可能な把持装置を提供することである。
An object of the present invention is to solve such a problem, and an object of the present invention is to finish learning for selecting an appropriate gripping position in a relatively short time in a work of taking out a bulky work. It is to provide a possible gripping device.
本開示に係る把持装置は、ワークの表面を観察するための撮影装置と、ワークの表面上の点の座標データを計測する三次元センサと、ワークを把持するためのハンドと、把持可能な姿勢にハンドを位置決めする位置決め機構と、撮影装置および三次元センサを用いてワークの最適把持位置を検出し、位置決め機構およびハンドを制御するワーク情報処理装置とを備える。ワーク情報処理装置は、三次元センサから取得した座標データまたは撮影装置から取得した画像データに基づいてワークの把持位置を決定し、把持位置に基づいてハンドがワークを把持するように位置決め機構およびハンドを作動させ、ハンドを撮影装置によって撮影させ、取得した画像データに基づいて実際にワークを把持できているか否かを判定し、判定結果と把持位置との関係を蓄積して把持位置における把持成功率を算出し、把持成功率に基づいて次回以降の把持動作時に採用する把持位置を決定する。
A grasping device according to the present disclosure includes a photographing device for observing the surface of a work, a three-dimensional sensor for measuring coordinate data of a point on the surface of the work, a hand for grasping the work, and a posture capable of grasping. And a work information processing device that detects the optimal gripping position of the work using the photographing device and the three-dimensional sensor, and controls the positioning mechanism and the hand. The work information processing apparatus determines a gripping position of the work based on the coordinate data acquired from the three-dimensional sensor or the image data acquired from the imaging device, and a positioning mechanism and a hand are provided so that the hand grips the work based on the gripping position. Is operated, the hand is photographed by the photographing device, and it is determined whether or not the work is actually grasped based on the acquired image data. The gripping rate is calculated, and a gripping position to be used in the next and subsequent gripping operations is determined based on the gripping success rate.
好ましくは、ワーク情報処理装置は、三次元センサから取得した座標データまたは撮影装置から取得した画像データを用いて把持対象とするワークの位置を検出するワーク検出部と、ワーク検出部で検出されたワークの位置に基づいてワークの周囲物とハンドとの干渉を判定し、単数または複数の把持位置候補を検出する把持位置候補検出部と、把持位置候補検出部で抽出された把持位置候補の中から最適把持位置を選択する把持位置選択部と、最適把持位置に基づいて位置決め機構またはハンドに位置決め指令または把持・開放指令を出力する指令出力部と、撮影装置から取得した画像データに基づいて実際にワークを把持できているか否かを画像処理で判定する把持動作成否判定部とを含む。把持位置候補検出部は、把持位置候補について、三次元センサまたは撮影装置から取得したデータと把持動作成否判定部の判定結果とを用いた機械学習結果により把持成功率を求める。把持位置選択部は、把持成功率が最大になる候補を最適把持位置として選択する。
Preferably, the work information processing device is a work detection unit that detects the position of the work to be grasped using the coordinate data acquired from the three-dimensional sensor or the image data acquired from the imaging device, and is detected by the work detection unit. A gripping position candidate detecting unit that determines interference between the hand around the workpiece and the hand based on the position of the workpiece and detects one or more gripping position candidates, and a gripping position candidate extracted by the gripping position candidate detecting unit. A gripping position selection unit for selecting an optimal gripping position from the above, a command output unit for outputting a positioning command or a gripping / opening command to a positioning mechanism or a hand based on the optimal gripping position, and an actual output based on image data obtained from the photographing device. And a gripping motion creation determination unit that determines whether or not the workpiece can be gripped by image processing. The grasping position candidate detection unit obtains a grasping success rate for the grasping position candidate based on a machine learning result using data acquired from the three-dimensional sensor or the imaging device and a determination result of the gripping movement creation non-determination unit. The gripping position selection unit selects a candidate having a maximum gripping success rate as an optimal gripping position.
より好ましくは、把持位置候補検出部は、機械学習の学習データとして三次元センサから取得した座標データまたは撮影装置から取得した画像データ、把持動作を実行したときの把持位置、および把持動作成否判定部の判定結果とを用いて把持成功率を求める。
More preferably, the gripping position candidate detection unit is coordinate data acquired from a three-dimensional sensor or image data acquired from an imaging device as learning data of machine learning, a gripping position when a gripping operation is performed, and a gripping movement creation determination unit. The gripping success rate is obtained using the determination result of (1).
より好ましくは、予め定められたNを自然数とすると、前記把持位置候補検出部が、前記把持成功率の算出に学習結果を利用しない把持動作回数はN回である。把持位置選択部は、N回目までの把持では把持位置候補のうち、前記ワークの画像上の重心位置に最も近い点を選択し、N回を超過してから機械学習結果を用いて得られた把持成功率が最大になる候補を最適把持位置として選択する。
More preferably, if the predetermined N is a natural number, the number of gripping operations in which the gripping position candidate detecting unit does not use a learning result for calculating the gripping success rate is N times. The gripping position selection unit selects a point closest to the position of the center of gravity on the image of the workpiece among the gripping position candidates in the gripping up to the Nth time, and is obtained by using the machine learning result after exceeding N times. The candidate with the highest gripping success rate is selected as the optimal gripping position.
より好ましくは、把持位置候補検出部は、把持装置の稼働中に三次元センサから取得した座標データまたは撮影装置から取得した画像データと把持動作成否判定部の判定結果を用い、逐次機械学習結果を更新する。
More preferably, the gripping position candidate detection unit uses the coordinate data obtained from the three-dimensional sensor or the image data obtained from the imaging device and the determination result of the gripping motion creation determination unit during the operation of the gripping device, and sequentially executes the machine learning result. Update.
より好ましくは、把持位置候補検出部は、一定回数の把持動作毎に機械学習結果の少なくとも一部をリセットする。
More preferably, the gripping position candidate detecting unit resets at least a part of the machine learning result every fixed number of gripping operations.
好ましくは、ワークを収容する容器を載置する振動台をさらに備え、ワーク情報処理装置は、次回の把持動作において、ハンドで把持を行なう前に容器内のワークを撮影装置で撮影した画像に基づいて決定した把持位置に対応する把持成功率が判定値よりも低い場合には、振動台によってワークの位置を変更する。
Preferably, the apparatus further includes a vibrating table on which a container for accommodating the work is placed, and in the next gripping operation, the workpiece information processing apparatus is configured to perform a gripping operation based on an image of the workpiece in the container captured by the image capturing device before gripping with the hand. If the grip success rate corresponding to the determined grip position is lower than the determination value, the position of the work is changed by the shaking table.
好ましくは、撮影装置は、ハンドで把持される前にワークが置いてある載置場所を撮影する第1カメラと、ハンドで把持された後にワークが配置されるべき配置場所を撮影する第2カメラとを含む。ワーク情報処理装置は、第2カメラから取得した画像データに基づいてワークの配置位置を決定し、配置位置が基準位置と一致するか否かを判定し判定結果に基づいて配置成功率を算出する。ワーク情報処理装置は、次回の把持動作において、ハンドで把持を行なう前に載置場所に置かれているワークを第1カメラで撮影し、撮影した画像に基づいて決定した把持位置に対応する把持成功率および配置成功率に基づいて、把持位置の採否を決定する。
Preferably, the photographing device includes a first camera for photographing a placement place where the work is placed before being gripped by the hand, and a second camera for photographing a placement place where the work is to be placed after being gripped by the hand. And The work information processing apparatus determines the arrangement position of the work based on the image data obtained from the second camera, determines whether the arrangement position matches the reference position, and calculates an arrangement success rate based on the determination result. . In the next gripping operation, the workpiece information processing apparatus captures, using the first camera, a workpiece placed at the mounting location before gripping with the hand, and grips the workpiece corresponding to the gripping position determined based on the captured image. Based on the success rate and the placement success rate, it is determined whether to adopt the grip position.
本発明によれば、バラ積みワークの取り出し作業において、適切な把持位置を選択するための学習を比較的短時間で終了することが可能となる。
According to the present invention, it is possible to finish the learning for selecting an appropriate gripping position in a relatively short time in the work of taking out bulk piled works.
以下、本発明の実施の形態について図面を参照しつつ説明する。なお、以下の図面において同一または相当する部分には同一の参照番号を付し、その説明は繰り返さない。
Hereinafter, embodiments of the present invention will be described with reference to the drawings. In the following drawings, the same or corresponding portions have the same reference characters allotted, and description thereof will not be repeated.
[実施の形態1]
以下の実施の形態では、バラ積み状態の円柱状ワークを把持する場合について説明する。本実施の形態の把持装置は、バラ積み状態のワークの把持位置検出において、周囲のワークとの干渉を避けてハンドを挿入するための最適把持位置を選択する。 [Embodiment 1]
In the following embodiments, a case in which a cylindrical work in a piled state is gripped will be described. The gripping device according to the present embodiment selects an optimal gripping position for inserting a hand while avoiding interference with surrounding workpieces in detecting a gripping position of a work in a piled state.
以下の実施の形態では、バラ積み状態の円柱状ワークを把持する場合について説明する。本実施の形態の把持装置は、バラ積み状態のワークの把持位置検出において、周囲のワークとの干渉を避けてハンドを挿入するための最適把持位置を選択する。 [Embodiment 1]
In the following embodiments, a case in which a cylindrical work in a piled state is gripped will be described. The gripping device according to the present embodiment selects an optimal gripping position for inserting a hand while avoiding interference with surrounding workpieces in detecting a gripping position of a work in a piled state.
(把持装置の構成)
図1は、実施の形態1の把持装置の構成を示す図である。図1を参照して、把持装置1は、ワークWの表面を観察するためのカメラ2と、ワーク表面の各点のXYZ座標データを計測する三次元センサ3と、ワークWを把持するためのハンド4と、ワークWを把持可能な姿勢にハンド4を位置決めする位置決め機構5と、カメラ2からの画像データR,G,Bおよび三次元センサ3からのXYZ座標データDを用いて最適把持位置を検出し、位置決め機構5およびハンド4を制御するワーク情報処理装置6とを備える。 (Configuration of gripping device)
FIG. 1 is a diagram illustrating a configuration of the gripping device according to the first embodiment. Referring to FIG. 1, agripping device 1 includes a camera 2 for observing a surface of a work W, a three-dimensional sensor 3 for measuring XYZ coordinate data of each point on the surface of the work W, and a gripper 1 for gripping the work W. Optimal gripping position using hand 4, positioning mechanism 5 for positioning hand 4 in a posture capable of gripping work W, and image data R, G, B from camera 2 and XYZ coordinate data D from three-dimensional sensor 3. And a work information processing device 6 for controlling the positioning mechanism 5 and the hand 4.
図1は、実施の形態1の把持装置の構成を示す図である。図1を参照して、把持装置1は、ワークWの表面を観察するためのカメラ2と、ワーク表面の各点のXYZ座標データを計測する三次元センサ3と、ワークWを把持するためのハンド4と、ワークWを把持可能な姿勢にハンド4を位置決めする位置決め機構5と、カメラ2からの画像データR,G,Bおよび三次元センサ3からのXYZ座標データDを用いて最適把持位置を検出し、位置決め機構5およびハンド4を制御するワーク情報処理装置6とを備える。 (Configuration of gripping device)
FIG. 1 is a diagram illustrating a configuration of the gripping device according to the first embodiment. Referring to FIG. 1, a
ワーク情報処理装置6は、三次元センサ3から取得した座標データまたは撮影装置2から取得した画像データに基づいてワークの把持位置を決定し、把持位置に基づいてハンド4がワークを把持するように位置決め機構5およびハンド4を作動させ、ハンド4を撮影装置2によって撮影させ、取得した画像データに基づいて実際にワークを把持できているか否かを判定し、判定結果と把持位置との関係を蓄積して把持位置における把持成功率を算出し、把持成功率に基づいて次回以降の把持動作時に採用する把持位置を決定する。
The work information processing device 6 determines the gripping position of the work based on the coordinate data acquired from the three-dimensional sensor 3 or the image data acquired from the imaging device 2, and causes the hand 4 to grip the work based on the gripping position. The positioning mechanism 5 and the hand 4 are operated, the hand 4 is photographed by the photographing device 2, and it is determined whether or not the workpiece is actually grasped based on the acquired image data. The accumulation success rate at the grasping position is accumulated and the grasping position to be employed in the next and subsequent grasping operations is determined based on the grasping success rate.
本実施の形態では、位置決め機構5として多関節ロボットを用いた例を示す。この他にも、水平方向にアームが移動する水平多関節型や、直交する3つのスライド軸により構成された直交型のロボットなどを位置決め機構5として選択することができる。
In this embodiment, an example in which an articulated robot is used as the positioning mechanism 5 will be described. In addition, a horizontal articulated robot in which an arm moves in a horizontal direction, an orthogonal robot configured by three orthogonal slide axes, or the like can be selected as the positioning mechanism 5.
図2は、ハンドの形状を示す正面図である。図3は、ハンドの形状を示す側面図である。ハンド4は、2本の指41および42でワーク挟み込むことでワークを把持する。
FIG. 2 is a front view showing the shape of the hand. FIG. 3 is a side view showing the shape of the hand. The hand 4 grips the work by sandwiching the work between the two fingers 41 and 42.
再び図1を参照して、三次元センサ3の内部には、図示しない2台のカメラが搭載されており、三角測量により物体表面の三次元座標を計測する。ワークWの表面に計測の目印となる模様がない場合、2台のカメラの画像上の点を対応付けできないため、光源からスリット状の光やドット状の光がワークWに向けて照射される。この光源は、三次元センサ3の内部に搭載されていても良いし、カメラ2の上方に配置しても良い。また、三次元センサ3は、特にステレオ方式の三次元センサに限定する必要はなく、光切断法や位相シフト法などの三次元座標データを計測するセンサであれば特に限定されない。
(1) Referring again to FIG. 1, two cameras (not shown) are mounted inside the three-dimensional sensor 3, and measure three-dimensional coordinates of the surface of the object by triangulation. If there is no pattern serving as a measurement mark on the surface of the work W, since points on the images of the two cameras cannot be associated with each other, slit light or dot light is emitted from the light source toward the work W. . This light source may be mounted inside the three-dimensional sensor 3 or may be arranged above the camera 2. In addition, the three-dimensional sensor 3 does not need to be particularly limited to a stereo three-dimensional sensor, and is not particularly limited as long as it measures three-dimensional coordinate data such as a light section method or a phase shift method.
(ワーク情報処理装置の内部構成)
図4は、ワーク情報処理装置6の内部構成を示す図である。ワーク情報処理装置6は、画像入力部61と、ワーク検出部62と、把持位置候補検出部63と、把持位置選択部64と、把持動作成否判定部65と、データ保存部67と、指令出力部66とを備える。 (Internal configuration of work information processing device)
FIG. 4 is a diagram illustrating an internal configuration of the workinformation processing apparatus 6. The work information processing device 6 includes an image input unit 61, a work detection unit 62, a grip position candidate detection unit 63, a grip position selection unit 64, a grip movement creation / non-judgment unit 65, a data storage unit 67, a command output A part 66.
図4は、ワーク情報処理装置6の内部構成を示す図である。ワーク情報処理装置6は、画像入力部61と、ワーク検出部62と、把持位置候補検出部63と、把持位置選択部64と、把持動作成否判定部65と、データ保存部67と、指令出力部66とを備える。 (Internal configuration of work information processing device)
FIG. 4 is a diagram illustrating an internal configuration of the work
画像入力部61は、三次元センサ3からのXYZ座標データおよびカメラ2からの画像データを入力する。ワーク検出部62は、三次元センサ3から取得したXYZ座標データまたはカメラ2から取得した画像データを用いて把持対象とするワークの位置を検出する。把持位置候補検出部63は、ワーク検出部62で検出されたワークの位置に基づいてワークの周囲物とハンド4との干渉を判定し、単数または複数の把持位置候補を検出する。把持位置選択部64は、把持位置候補検出部63で抽出された把持位置候補の中から、予測された成功率が最大になる最適把持位置を選択する。把持動作成否判定部65は、カメラ2から取得した画像データに基づいて実際にワークを把持できているか否かを画像処理で判定する。
The image input unit 61 inputs XYZ coordinate data from the three-dimensional sensor 3 and image data from the camera 2. The work detection unit 62 detects the position of the work to be grasped using the XYZ coordinate data acquired from the three-dimensional sensor 3 or the image data acquired from the camera 2. The gripping position candidate detection unit 63 determines interference between the object around the work and the hand 4 based on the position of the work detected by the work detection unit 62, and detects one or more gripping position candidates. The gripping position selection unit 64 selects an optimal gripping position at which the predicted success rate is maximized from the gripping position candidates extracted by the gripping position candidate detection unit 63. The gripping movement creation determination unit 65 determines whether or not the workpiece is actually gripped by image processing based on the image data acquired from the camera 2.
The data storage unit 67 stores the XYZ coordinate data from the three-dimensional sensor 3, the image data acquired from the camera 2, the coordinate values of the gripping position candidates, the success/failure determination results of gripping operations, and the like. The command output unit 66 outputs a positioning command or a grip/release command to the positioning mechanism 5 or the hand 4 based on the optimal gripping position.
The gripping position candidate detection unit 63 obtains, for each gripping position candidate, a gripping success rate from a machine learning result that uses the data acquired from the three-dimensional sensor 3 or the imaging device 2 together with the determination results of the gripping operation success/failure determination unit 65, and the candidate with the highest gripping success rate is selected as the optimal gripping position.

(Processing procedure in the work information processing device)
Hereinafter, the processing of each unit of the work information processing device 6 will be described in detail.
The work detection unit 62 detects the gripping target using the XYZ coordinate data D acquired from the three-dimensional sensor 3. In the present embodiment, an example in which pattern matching is used to detect the gripping target will be described.
FIG. 5 is a diagram for explaining pattern matching. An image of the work W is stored in advance in the data storage unit 67 as a template T. When detecting a gripping target, the work detection unit 62 reads the template T, superimposes it on the image data from the camera 2 or the XYZ coordinate data from the three-dimensional sensor 3, and performs pattern matching to search for the work whose posture best matches the template.
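A minimal sketch of such a template search, using OpenCV purely for illustration (the file names and the acceptance threshold are assumptions; the patent does not specify the matching implementation):

```python
import cv2

# Hypothetical inputs: in the device, the scene would come from camera 2
# and the template T from the data storage unit 67.
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("template_T.png", cv2.IMREAD_GRAYSCALE)

# Normalized cross-correlation score at every template position.
scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)

if best_score > 0.8:  # assumed acceptance threshold
    h, w = template.shape
    center = (best_loc[0] + w // 2, best_loc[1] + h // 2)
    print("work detected at", center, "score", best_score)
```

Because the device also recovers rotation angles (see FIG. 6), a practical implementation would repeat such a search over rotated or otherwise transformed templates.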
FIG. 6 is a diagram for explaining the information detected by pattern matching. Pattern matching detects the center coordinates (ox, oy, oz) of the work W and the rotation angles (α, β, γ) around each axis.
Next, the processing executed by the gripping position candidate detection unit 63 will be described. FIG. 7 is a view of the work W seen from the + direction of the Z axis in FIG. 6. FIG. 7 shows the positional relationship among the work W to be gripped, four surrounding works, and the two fingers 41 and 42 of the hand 4. In FIG. 7, the hand 4 is positioned so that the midpoint between the coordinates of the two fingers 41 and 42 coincides with the center of gravity O of the work W. In the example of FIG. 7, the work W and the two fingers 41 and 42 of the hand 4 do not interfere. The gripping position candidate detection unit 63 judges a position where the work W and the two fingers 41 and 42 of the hand 4 do not interfere, as shown in FIG. 7, to be grippable.
FIG. 8 is a diagram for explaining the processing executed by the gripping position candidate detection unit 63. Using the neural network of FIG. 8, the gripping position candidate detection unit 63 predicts the gripping position candidates shown in FIG. 7 and the success rate of the gripping operation. The neural network receives the XYZ coordinate data D acquired from the three-dimensional sensor 3 and the images R, G, and B acquired from the camera 2 as inputs, and predicts the XYZ coordinate values of the gripping position candidates and the gripping success rates.
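The patent does not disclose the network architecture, so the following is only a minimal sketch of a network with the stated inputs and outputs, written in PyTorch with an assumed layer layout and an assumed fixed number of candidates:

```python
import torch
import torch.nn as nn

class GraspNet(nn.Module):
    """Toy stand-in for the network of FIG. 8 (architecture assumed).

    Input: a 4-channel image stacking camera R, G, B and sensor depth D.
    Output: XYZ coordinates and a 0-100 success rate per grip candidate.
    """
    def __init__(self, num_candidates: int = 5):
        super().__init__()
        self.num_candidates = num_candidates
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.xyz_head = nn.Linear(32, num_candidates * 3)
        self.rate_head = nn.Linear(32, num_candidates)

    def forward(self, x: torch.Tensor):
        f = self.features(x)
        coords = self.xyz_head(f).view(-1, self.num_candidates, 3)
        rates = torch.sigmoid(self.rate_head(f)) * 100.0  # keep rates in 0-100
        return coords, rates
```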
The gripping position candidate detection unit 63 also performs the learning process of the neural network. Backpropagation (the error backpropagation method) is used for learning. The following teacher data are prepared for learning.
The work W is photographed by the camera 2 from the + direction of the Z axis in FIG. 6. For the work W, a person indicates positions where the hand 4 does not interfere. FIG. 9 is a first example of indicating a grippable position of the work W. FIG. 10 is a second example. A person indicates rectangles such as those shown in FIGS. 9 and 10, which serve as teacher data for grippable positions.
The center coordinates X and Y and the depth coordinate Z of the rectangles shown in FIGS. 9 and 10 are obtained from the coordinate data D acquired from the three-dimensional sensor 3. The maximum value of 100 is given as the initial success rate of each indicated gripping position. The success rate takes values in the range of 0 to 100.
Learning through actual gripping trials takes time. To address this, in the present embodiment a certain amount of learning is completed before any operation is attempted.
For example, works are photographed in advance, a person judges and indicates grippable locations in the photographed images, and learning is performed beforehand using the photographed images and the indicated locations as teacher data. Using this learning result reduces the number of actual operation trials.
Moreover, with the above method it is no longer necessary to define features in advance, as in Japanese Patent Application Laid-Open No. 2016-221647 (Patent Document 3); the feature information needed to judge whether gripping is possible can be acquired through learning.
FIG. 11 is a diagram showing an example arrangement of a work W having a plurality of grippable locations and surrounding works. Assuming the situation of FIG. 11, a label indicating the gripping priority is assigned to each gripping position. Here, for example, three levels of priority are provided, with A being the highest, followed by B and C. The priority is judged and assigned by the person who indicates the rectangles.
As described above, each rectangle is assigned center coordinates (X, Y), a depth coordinate Z, the vertical and horizontal sizes of the rectangle, an initial success rate, and a label. The neural network is trained using this information together with the images R, G, and B of the work W and the XYZ coordinate data D acquired from the three-dimensional sensor 3 as teacher data. The gripping position candidate detection unit 63 detects gripping position candidates using this learning result as the initial state.
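One teacher-data record therefore bundles an annotated rectangle with its initial rate and priority label. A minimal sketch of such a record follows; the field names and example values are assumptions:

```python
from dataclasses import dataclass

@dataclass
class GraspLabel:
    """One human-annotated rectangle as in FIGS. 9-11."""
    x: float                     # rectangle center X
    y: float                     # rectangle center Y
    z: float                     # depth at the center, from sensor data D
    width: float                 # horizontal size of the rectangle
    height: float                # vertical size of the rectangle
    success_rate: float = 100.0  # initial value; valid range is 0-100
    priority: str = "A"          # A (highest), B, or C, per FIG. 11

label = GraspLabel(x=212.0, y=148.0, z=305.5, width=40.0, height=18.0)
```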
Next, the processing executed by the gripping position selection unit 64 and the gripping operation success/failure determination unit 65 will be described. The gripping position selection unit 64 selects, from the gripping position candidates detected by the gripping position candidate detection unit 63, the candidate with the highest priority and the highest predicted success rate of the gripping operation.
The success or failure of the gripping operation is determined by the gripping operation success/failure determination unit 65. After the actual gripping operation, the hand 4 is photographed by the camera 2 and the work in the hand 4 is detected by image processing. If the work is detected, the operation is judged successful; if not detected, it is judged a failure.
FIG. 12 is a diagram showing an image detected on a successful grip. Comparing the image of the hand 4 not gripping a work with the image of the hand 4 that has successfully gripped the work W, a mismatched portion appears. When such a mismatched portion exists, the grip is judged successful.
FIG. 13 is a diagram showing an image detected on a failed grip. Comparing the image of the hand 4 not gripping a work with the image of the hand 4 that has failed to grip the work W, the images match and no difference image appears. When there is no mismatched portion, the grip is judged a failure.
As shown in FIGS. 12 and 13, the gripping operation success/failure determination unit 65 compares an image of the empty hand 4, photographed in advance and stored in the data storage unit 67, with the image taken after the gripping operation; if the two images match, the work is judged not detected, and if they do not match, the work is judged detected.
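A minimal sketch of this difference check with OpenCV (the threshold values are assumptions; the patent only states that a mismatch means the work was detected):

```python
import cv2

def work_detected(empty_hand_img, after_grip_img,
                  diff_thresh: int = 30, min_pixels: int = 500) -> bool:
    """Image-difference check of FIGS. 12 and 13: True means grip success."""
    diff = cv2.absdiff(empty_hand_img, after_grip_img)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, diff_thresh, 255, cv2.THRESH_BINARY)
    # Enough differing pixels -> the hand holds something it did not before.
    return cv2.countNonZero(mask) > min_pixels
```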
The gripping position candidate detection unit 63 continues learning during gripping-operation trials and updates the learning result. As learning data for the machine learning, the gripping position candidate detection unit 63 uses the coordinate data acquired from the three-dimensional sensor 3 or the image data acquired from the imaging device, the gripping position at which the gripping operation was executed, and the determination result of the gripping operation success/failure determination unit 65.
When the gripping operation success/failure determination unit 65 judges that the work was not detected, the success rate S of the gripping operation predicted by the neural network is substituted into the following equation to obtain an updated success rate S':

S' = α × S + β

Here, (α, β) are called update coefficients. Their values are arbitrary; here (α, β) = (0.99, -0.1) is used. Alternatively, optimal values determined empirically through trial learning may be used.
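The update rule is a one-line computation; a sketch follows, in which clamping to the 0-100 range is an assumption based on the stated range of S:

```python
def update_success_rate(s: float, alpha: float = 0.99, beta: float = -0.1) -> float:
    """Apply S' = alpha * S + beta after a failed grip, clamped to 0-100."""
    return max(0.0, min(100.0, alpha * s + beta))

# Example: a candidate predicted at 80.0 drops to 79.1 after one failure.
print(update_success_rate(80.0))
```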
During gripping-operation trials, learning is performed using the success rate S' updated as described above; the center coordinates (X, Y), depth coordinate Z, rectangle sizes, and label predicted by the neural network; the images R, G, and B that the gripping position candidate detection unit 63 used for detection; and the data D from the three-dimensional sensor 3.
The gripping position candidate detection unit 63 sequentially updates the machine learning result during operation of the gripping device 1, using the coordinate data acquired from the three-dimensional sensor 3 or the image data acquired from the imaging device 2 together with the determination results of the gripping operation success/failure determination unit 65.
As described above, by learning from the result of each gripping operation, gripping positions with low success rates are excluded and only gripping positions with high success rates come to be selected. This reduces gripping failures and makes the pickup operation more efficient.
In the present embodiment, an example in which learning is performed sequentially while the gripping device operates has been described. However, depending on changes in the surrounding environment, changes in the shape of the hand 4, changes in the machining quality of the works, and the like, resetting the learning result at fixed intervals may be preferable to sequential learning.
In such cases, the learning result may be reset after every preset number M of gripping operations. The gripping position candidate detection unit 63 resets the machine learning result every fixed number of gripping operations. After a reset, the same learning is performed again on each gripping operation, and the learning result is reset once the learning count reaches M. M need not be a fixed value; it may be changed at each reset according to changes in the surrounding environment, the hand 4, the works, and so on.
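A sketch of this reset schedule, with a hypothetical learner interface (`make_model`, `learn`) standing in for whatever incremental learner the device uses:

```python
class PeriodicallyResetLearner:
    """Reset the learning result every m gripping operations."""
    def __init__(self, make_model, m: int = 1000):
        self.make_model = make_model  # factory returning a freshly initialized model
        self.m = m
        self.count = 0
        self.model = make_model()

    def after_grip(self, sample) -> None:
        self.model.learn(sample)  # incremental update on each trial (hypothetical API)
        self.count += 1
        if self.count >= self.m:
            self.model = self.make_model()  # discard the accumulated result
            self.count = 0
            # m itself could also be adjusted here as the environment changes.
```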
In the automatic learning during gripping operations, a person may also periodically check the recognition results and create teacher data.
In that case, referring to the images stored in the data storage unit 67, a person indicates rectangles such as those shown in FIGS. 9 and 10. The updated success rate S' may also be specified directly by a person. This can further shorten the learning time.
The series of processes described above is explained with a flowchart. FIG. 14 is a flowchart showing the processing performed by the work information processing device. The processing of FIG. 14 is performed after a person has provided the gripping-position teacher data described with FIGS. 9 and 10 and the neural network has learned to some extent. The work information processing device 6 acquires images from the camera 2 and the three-dimensional sensor 3 (S1). Next, the work information processing device 6 detects a gripping target from the acquired images (S2), detects gripping position candidates, and predicts the gripping success rate of each candidate (S3).
The work information processing device 6 then selects, from the gripping position candidates, the candidate with the highest priority and the highest gripping success rate (S4), and performs a gripping operation using the positioning mechanism 5 and the hand 4 (S5).
Preferably, with N a natural number, N is set in advance as the number of gripping operations for which the learning result is not used in calculating the gripping success rate in step S3. For the first N grips, the point closest to the position of the center of gravity of the work in the image is selected from the gripping position candidates; once N is exceeded, the gripping success rate is calculated using the machine learning result in step S3, and the candidate with the highest gripping success rate is selected as the optimal gripping position in step S4.
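A sketch of this two-phase selection rule; the candidate structure and field names are assumptions made for illustration:

```python
def select_grip(candidates, trial_count: int, n_warmup: int, centroid_xy):
    """First N trials: nearest candidate to the work's image centroid.
    After that: highest priority (A over B over C), then highest rate."""
    if trial_count <= n_warmup:
        return min(candidates,
                   key=lambda c: (c["xy"][0] - centroid_xy[0]) ** 2
                               + (c["xy"][1] - centroid_xy[1]) ** 2)
    return max(candidates,
               key=lambda c: (-ord(c["priority"]), c["success_rate"]))

candidates = [
    {"xy": (100, 90), "priority": "B", "success_rate": 88.0},
    {"xy": (140, 60), "priority": "A", "success_rate": 72.0},
]
print(select_grip(candidates, trial_count=3, n_warmup=10, centroid_xy=(120, 80)))
```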
After executing the gripping operation, the work information processing device 6 photographs the hand 4 with the camera 2 (S6) and checks whether the gripping operation succeeded (S7). The work information processing device 6 then stores the image, the gripping position coordinates, and the success/failure of the gripping operation (S8), and executes the learning process of the neural network (S9).
When the load of the learning process is high, calculating the gripping success rate may take time. In the present embodiment, an example was shown in which learning is performed by the gripping position candidate detection unit 63 of the work information processing device 6, but this is not limiting; the learning process of the gripping position candidate detection unit 63 may be performed by a learning device dedicated to the work information processing device 6. In this case, for example, the work information processing device 6 and the learning device are connected by serial communication, I/O, or the like, and the coordinate values of the gripping position candidates used for learning, the XYZ coordinate data D of the three-dimensional sensor, the camera image data R, G, and B, and the success/failure determination results of the gripping operations are transmitted from the work information processing device 6 to the learning device. The learning device performs learning based on the received data and transmits the learning result to the work information processing device 6. The work information processing device 6 calculates the gripping success rate using the new learning result received from the learning device.
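As a rough illustration of handing one training sample to such an external learning device (the patent names serial communication or I/O; TCP with a JSON payload is used here purely for illustration, and every field name is an assumption):

```python
import json
import socket

def send_training_sample(host: str, port: int, sample: dict) -> None:
    """Ship one sample to the external learner as a newline-terminated JSON line."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(json.dumps(sample).encode("utf-8") + b"\n")

sample = {
    "grip_xyz": [0.12, -0.04, 0.30],  # gripping position candidate coordinates
    "success": True,                   # success/failure determination result
    # In practice the image data R, G, B and sensor data D would be
    # serialized here as well, e.g. as base64-encoded arrays.
}
# send_training_sample("learning-device.local", 5000, sample)  # hypothetical address
```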
In the present embodiment, backpropagation (the error backpropagation method) is used for learning, but other methods may be used instead: supervised learning methods such as support vector machines; unsupervised learning methods such as autoencoders, the k-means method, and principal component analysis; or machine learning methods combining these.
As described above, according to the gripping device of the first embodiment, using work images actually captured by the three-dimensional sensor or the camera together with actual gripping-operation results for learning allows variations in work dimensions and surface texture, the current state of the hand, and the like to be reflected in the learning, which mitigates drops in the gripping success rate. Because drops in the gripping success rate are mitigated, the tact time of the operation can also be shortened.
[Embodiment 2]
In the first embodiment, the work with the highest gripping success rate among the works in the current arrangement is selected. However, depending on the positional relationship with adjacent works, the hand 4 may not be insertable. The second embodiment assists the gripping operation in such cases.
FIG. 15 is a diagram showing the configuration of the gripping device according to the second embodiment. Referring to FIG. 15, in addition to the configuration of the gripping device 1 of FIG. 1, the gripping device 1A further includes a vibration table 8 on which a container accommodating the works (work tray 7) is placed. The vibration table 8 is provided with a vibrating body. The gripping device 1A also includes a work information processing device 6A in place of the work information processing device 6. The camera 2, the three-dimensional sensor 3, the hand 4, and the positioning mechanism 5 are the same as in FIG. 1, so their description is not repeated.
When the gripping success rate of the gripping position candidates is lower than a threshold, the work information processing device 6A vibrates the vibration table 8 in an attempt to improve the gripping success rate.
In the next gripping operation, if the gripping success rate corresponding to the gripping position determined based on an image of the works in the container photographed by the imaging device before gripping with the hand is lower than a determination value, the work information processing device 6A changes the positions of the works with the vibration table 8.
FIG. 16 is a flowchart for explaining the processing executed by the work information processing device of the second embodiment. The processing of the flowchart of FIG. 16 differs from the processing described with FIG. 14 in that steps S51 and S52 are executed after step S4.
The work information processing device 6A sequentially performs image acquisition (S1), gripping-target detection (S2), prediction of the gripping success rate of each gripping position candidate (S3), and selection of the gripping position candidate with the highest priority and the highest gripping success rate (S4), as in FIG. 14.
Then, in step S51, the work information processing device 6A determines whether the predicted success rate S of the gripping position candidate is higher than a threshold T.
If the predicted success rate S of the gripping position candidate does not reach the threshold T (NO in S51), the vibration table 8 is vibrated in the direction of the arrow for a fixed time (S52), and the processing of steps S1 to S4 is executed again in an attempt to detect a gripping position candidate whose success rate S exceeds the threshold T.
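A sketch of this retry loop; `sense`, `predict`, and `shake` are hypothetical callables standing in for image acquisition, the learned predictor, and the vibration table 8, and the retry limit is an assumption:

```python
def acquire_candidate(sense, predict, shake, threshold_t: float, max_shakes: int = 5):
    """Steps S1-S4 with the S51/S52 vibration loop of FIG. 16."""
    for _ in range(max_shakes + 1):
        images = sense()                          # S1: camera 2 + sensor 3
        candidate = predict(images)               # S2-S4: best priority and rate
        if candidate.success_rate > threshold_t:  # S51
            return candidate
        shake()                                   # S52: vibrate for a fixed time
    return None  # caller may fall back to nudging the work with hand 4
```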
If the predicted success rate S of the gripping position candidate becomes higher than the threshold T (YES in S51), the processing of steps S5 to S9 is executed as in the first embodiment.
If no gripping position candidate with an improved success rate is detected even after repeating the processing of step S52 several times, for example, the hand 4 may be used to shift the works.
The gripping device of the second embodiment provides the same effects as the gripping device of the first embodiment and can furthermore improve the gripping success rate when it is low due to a poor arrangement of the works.
[Embodiment 3]
In the first embodiment, learning was performed so as to raise the gripping success rate. However, the gripping device must not only grip the work but also place the gripped work on a work table or the like for processing in the next step. Depending on the gripping position of the hand 4, the work may not be placed at the position optimal for the next process. In the third embodiment, learning is performed in consideration of the placement operation in addition to the gripping operation.
FIG. 17 is a diagram showing the configuration of the gripping device according to the third embodiment. Referring to FIG. 17, the gripping device 1B has the configuration of the gripping device 1 of FIG. 1 but includes a camera 2A and a camera 2B as imaging devices. The camera 2A is the same kind of camera as the camera 2 of FIG. 1. The camera 2B photographs the position of the work placed on the work table 10 for the next process. The gripping device 1B also includes a work information processing device 6B in place of the work information processing device 6. The camera 2, the three-dimensional sensor 3, the hand 4, and the positioning mechanism 5 are the same as in FIG. 1.
That is, as shown in FIG. 17, the imaging device includes a first camera 2A that photographs the placement location where the work is placed before being gripped by the hand, and a second camera 2B that photographs the member on which the work is arranged after being gripped by the hand. The work information processing device 6B determines the arrangement position of the work based on the image data acquired from the second camera 2B, determines whether the arrangement position matches a reference position, and calculates a placement success rate based on the determination results. In the next gripping operation, the work information processing device 6B decides whether to adopt a gripping position based on the gripping success rate and the placement success rate corresponding to the gripping position determined from an image, taken by the first camera 2A, of the work placed at the placement location before gripping with the hand 4.
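A sketch of one way such a combined decision could look; the thresholds and the AND-combination are assumptions, since the text only states that adoption is decided based on both rates:

```python
def adopt_grip_position(grip_rate: float, place_rate: float,
                        grip_min: float = 70.0, place_min: float = 70.0) -> bool:
    """Adopt a candidate only if both the gripping and placement success
    rates clear their (assumed) minimums."""
    return grip_rate >= grip_min and place_rate >= place_min

print(adopt_grip_position(85.0, 62.0))  # False: grips well but places poorly
```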
FIG. 18 is a flowchart for explaining the processing executed by the work information processing device of the third embodiment. The processing of the flowchart of FIG. 18 differs from the processing described with FIG. 14 in that steps S101 to S103 are executed after step S7.
The work information processing device 6B sequentially performs the processing of steps S1 to S7 as in FIG. 14. Then, in step S101, the work information processing device 6B executes the placement operation of the gripped work; in step S102, it photographs the placement position with the camera 2B; and in step S103, it checks whether the placement operation succeeded.
Then, as in the first embodiment, the processing of steps S8 and S9 is executed. In the learning process of step S9, the placement success rate is taken into account in addition to the gripping success rate.
FIG. 19 is a diagram showing an image detected on a successful placement. Comparing an image in which the work is arranged at the target position with an image of the work actually placed on the work table 10 after being gripped by the hand 4, the two match. When there is no mismatched portion in this way, the placement is judged successful.
FIG. 20 is a diagram showing an image detected on a failed placement. Comparing an image in which the work is arranged at the target position with an image of the work actually placed on the work table 10 after being gripped by the hand 4, a mismatch occurs. When there is a mismatched portion in this way, the placement is judged a failure.
As shown in FIGS. 19 and 20, the work information processing device 6B compares an image of the work arranged at the target position, photographed in advance and stored in the data storage unit 67, with the image of the work after the placement operation; if the two images match, the placement is judged successful, and if they do not match, it is judged a failure.
The gripping device of the third embodiment provides the same effects as the gripping device of the first embodiment and can furthermore improve the placement success rate when it is low due to a poor gripping position on the work.
Note that a gripping device combining Embodiments 2 and 3 may also be used.
The embodiments disclosed herein should be considered in all respects as illustrative and not restrictive. The scope of the present invention is defined by the claims rather than by the above description of the embodiments, and is intended to include all modifications within the meaning and scope equivalent to the claims.
1, 1A, 1B gripping device; 2, 2A, 2B camera; 3 three-dimensional sensor; 4 hand; 5 positioning mechanism; 6, 6A, 6B work information processing device; 7 work tray; 8 vibration table; 10 work table; 41, 42 finger; 61 image input unit; 62 work detection unit; 63 gripping position candidate detection unit; 64 gripping position selection unit; 65 gripping operation success/failure determination unit; 66 command output unit; 67 data storage unit.
Claims (8)
- A gripping device comprising: an imaging device for observing a surface of a work; a three-dimensional sensor that measures coordinate data of points on the surface of the work; a hand for gripping the work; a positioning mechanism that positions the hand in a grippable posture; and a work information processing device that detects an optimal gripping position of the work using the imaging device and the three-dimensional sensor and controls the positioning mechanism and the hand, wherein the work information processing device determines a gripping position of the work based on coordinate data acquired from the three-dimensional sensor or image data acquired from the imaging device, operates the positioning mechanism and the hand so that the hand grips the work based on the gripping position, causes the imaging device to photograph the hand, determines based on the acquired image data whether the work has actually been gripped, accumulates the relationship between the determination results and the gripping positions to calculate a gripping success rate at each gripping position, and determines the gripping position to be adopted in subsequent gripping operations based on the gripping success rate.
- The gripping device according to claim 1, wherein the work information processing device includes: a work detection unit that detects the position of the work to be gripped using coordinate data acquired from the three-dimensional sensor or image data acquired from the imaging device; a gripping position candidate detection unit that determines interference between objects surrounding the work and the hand based on the position of the work detected by the work detection unit and detects one or more gripping position candidates; a gripping position selection unit that selects the optimal gripping position from among the gripping position candidates extracted by the gripping position candidate detection unit; a command output unit that outputs a positioning command or a grip/release command to the positioning mechanism or the hand based on the optimal gripping position; and a gripping operation success/failure determination unit that determines by image processing, based on image data acquired from the imaging device, whether the work has actually been gripped, wherein the gripping position candidate detection unit obtains, for each gripping position candidate, the gripping success rate from a machine learning result using data acquired from the three-dimensional sensor or the imaging device and the determination results of the gripping operation success/failure determination unit, and the gripping position selection unit selects the candidate with the highest gripping success rate as the optimal gripping position.
- The gripping device according to claim 2, wherein the gripping position candidate detection unit obtains the gripping success rate using, as learning data for the machine learning, coordinate data acquired from the three-dimensional sensor or image data acquired from the imaging device, the gripping position at which a gripping operation was executed, and the determination results of the gripping operation success/failure determination unit.
- The gripping device according to claim 2, wherein, with a predetermined natural number N, the number of gripping operations for which the gripping position candidate detection unit does not use the learning result in calculating the gripping success rate is N, and the gripping position selection unit selects, for the first N gripping operations, the gripping position candidate closest to the position of the center of gravity of the work in the image, and after N operations have been exceeded, selects as the optimal gripping position the candidate with the highest gripping success rate obtained using the machine learning result.
- The gripping device according to claim 2, wherein the gripping position candidate detection unit sequentially updates the machine learning result during operation of the gripping device, using coordinate data acquired from the three-dimensional sensor or image data acquired from the imaging device and the determination results of the gripping operation success/failure determination unit.
- The gripping device according to claim 2, wherein the gripping position candidate detection unit resets at least a part of the machine learning result every fixed number of gripping operations.
- The gripping device according to claim 1, further comprising a vibration table on which a container accommodating the work is placed, wherein, in the next gripping operation, when the gripping success rate corresponding to the gripping position determined based on an image of the work in the container photographed by the imaging device before gripping with the hand is lower than a determination value, the work information processing device changes the position of the work with the vibration table.
- The gripping device according to claim 1, wherein the imaging device includes: a first camera that photographs a placement location where the work is placed before being gripped by the hand; and a second camera that photographs an arrangement location where the work is to be arranged after being gripped by the hand, wherein the work information processing device determines the arrangement position of the work at the arrangement location based on image data acquired from the second camera, determines whether the arrangement position matches a reference position, and calculates a placement success rate based on the determination results, and wherein, in the next gripping operation, the work information processing device photographs, with the first camera, the work placed at the placement location before gripping with the hand, and decides whether to adopt the gripping position based on the gripping success rate and the placement success rate corresponding to the gripping position determined based on the photographed image.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018-140356 | 2018-07-26 | ||
JP2018140356A JP7191569B2 (en) | 2018-07-26 | 2018-07-26 | gripping device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020022302A1 true WO2020022302A1 (en) | 2020-01-30 |
Family
ID=69181536
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2019/028757 WO2020022302A1 (en) | 2018-07-26 | 2019-07-23 | Grasping device |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP7191569B2 (en) |
WO (1) | WO2020022302A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2021177239A1 (en) * | 2020-03-05 | 2021-09-10 | ||
DE102021210903A1 (en) | 2021-09-29 | 2023-03-30 | Robert Bosch Gesellschaft mit beschränkter Haftung | Method for picking up an object using a robotic device |
DE102023201407A1 (en) | 2023-02-17 | 2024-08-22 | Kuka Deutschland Gmbh | Method and system for improving grip accessibility |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6786136B1 (en) * | 2020-04-17 | 2020-11-18 | リンクウィズ株式会社 | Information processing method, information processing system, program |
WO2022239878A1 (en) * | 2021-05-10 | 2022-11-17 | 코가플렉스 주식회사 | Method for robot gripping and training method for robot gripping |
US20240269845A1 (en) * | 2021-05-28 | 2024-08-15 | Kyocera Corporation | Hold position determination device and hold position determination method |
JP7551940B2 (en) | 2021-09-15 | 2024-09-17 | ヤマハ発動機株式会社 | Image processing device, part gripping system, image processing method, and part gripping method |
DE112021008069T5 (en) * | 2021-09-15 | 2024-05-23 | Yamaha Hatsudoki Kabushiki Kaisha | Image processing device, component gripping system, image processing method and component gripping method |
KR20230092293A (en) * | 2021-12-17 | 2023-06-26 | 한국전자기술연구원 | System and method for additive manufacturing process optimization of Data-based Powder Bed Fusion method |
- 2018-07-26: JP application JP2018140356A filed (patent JP7191569B2), status: active
- 2019-07-23: WO application PCT/JP2019/028757 filed (publication WO2020022302A1), status: application filing
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH05282007A (en) * | 1991-12-20 | 1993-10-29 | Robert Bosch Gmbh | Method and device for digital adaptive control |
JPH07225748A (en) * | 1993-12-30 | 1995-08-22 | Caterpillar Inc | Equipment and method for supervision and training of neural network |
JP2010207989A (en) * | 2009-03-11 | 2010-09-24 | Honda Motor Co Ltd | Holding system of object and method of detecting interference in the same system |
JP2014210310A (en) * | 2013-04-18 | 2014-11-13 | ファナック株式会社 | Robot system equipped with robot for carrying work |
JP2017030135A (en) * | 2015-07-31 | 2017-02-09 | ファナック株式会社 | Machine learning apparatus, robot system, and machine learning method for learning workpiece take-out motion |
JP2017162449A (en) * | 2016-03-02 | 2017-09-14 | キヤノン株式会社 | Information processing device, and method and program for controlling information processing device |
JP2017185577A (en) * | 2016-04-04 | 2017-10-12 | ファナック株式会社 | Machine learning device for performing learning by use of simulation result, mechanical system, manufacturing system and machine learning method |
WO2018116589A1 (en) * | 2016-12-19 | 2018-06-28 | 株式会社安川電機 | Industrial device image recognition processor and controller |
Non-Patent Citations (2)
Title |
---|
"Yaskawa Electric's deep learning for gripping three types of workpieces, Mitsubishi Electric's", NIKKEI ROBOT I C S, vol. 30, no. 15, 10 December 2017 (2017-12-10), pages 10 - 16 * |
SHINDOH, TOMONORI, TANAKA, ATSUSHI,: "FANUC/PFN with active learning and Google, etc., with a simulator picking and gripping machine learning", NIKKEI ROBOTICS, vol. 7, no. 3, 10 January 2016 (2016-01-10), pages 20 - 73 * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPWO2021177239A1 (en) * | 2020-03-05 | 2021-09-10 | ||
WO2021177239A1 (en) * | 2020-03-05 | 2021-09-10 | ファナック株式会社 | Extraction system and method |
US20230125022A1 (en) * | 2020-03-05 | 2023-04-20 | Fanuc Corporation | Picking system and method |
JP7481427B2 (en) | 2020-03-05 | 2024-05-10 | ファナック株式会社 | Removal system and method |
DE102021210903A1 (en) | 2021-09-29 | 2023-03-30 | Robert Bosch Gesellschaft mit beschränkter Haftung | Method for picking up an object using a robotic device |
DE102023201407A1 (en) | 2023-02-17 | 2024-08-22 | Kuka Deutschland Gmbh | Method and system for improving grip accessibility |
Also Published As
Publication number | Publication date |
---|---|
JP7191569B2 (en) | 2022-12-19 |
JP2020015141A (en) | 2020-01-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020022302A1 (en) | Grasping device | |
JP7467041B2 (en) | Information processing device, information processing method and system | |
JP4938115B2 (en) | Work take-out device and work take-out method | |
JP4226623B2 (en) | Work picking device | |
US7283661B2 (en) | Image processing apparatus | |
US20180222048A1 (en) | Control device, robot, and robot system | |
US20180222057A1 (en) | Control device, robot, and robot system | |
US20180225113A1 (en) | Control device, robot, and robot system | |
EP1449626B1 (en) | Workpiece conveying apparatus with visual sensor for checking the gripping state | |
EP1428634B1 (en) | Workpiece taking-out robot with a three-dimensional visual sensor | |
JP5685027B2 (en) | Information processing apparatus, object gripping system, robot system, information processing method, object gripping method, and program | |
US20180222058A1 (en) | Control device, robot, and robot system | |
JP2019181622A (en) | Hand control device and hand control system | |
US11213954B2 (en) | Workpiece identification method | |
CN111745640B (en) | Object detection method, object detection device, and robot system | |
JP2012030320A (en) | Work system, working robot controller, and work program | |
CN110303474B (en) | Robot system for correcting teaching of robot using image processing | |
JP2016196077A (en) | Information processor, information processing method, and program | |
CN114670189B (en) | Storage medium, and method and system for generating control program of robot | |
JP2021146403A (en) | Control device and program | |
JP2008168372A (en) | Robot device and shape recognition method | |
JP2021115693A (en) | Control device of robot system, control method of robot system, computer control program and robot system | |
JP4572497B2 (en) | Robot controller | |
US20220134550A1 (en) | Control system for hand and control method for hand | |
JP6666764B2 (en) | Work recognition method and random picking method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19840895; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 19840895; Country of ref document: EP; Kind code of ref document: A1 |