WO2021111929A1 - Picking system, picking method, and program - Google Patents

Picking system, picking method, and program

Info

Publication number
WO2021111929A1
Authority
WO
WIPO (PCT)
Prior art keywords
work
picking
image
analysis unit
picking system
Prior art date
Application number
PCT/JP2020/043700
Other languages
English (en)
Japanese (ja)
Inventor
佐藤 大輔
浩紀 八登
武 坂本
Original Assignee
Arithmer株式会社
Priority date
Filing date
Publication date
Application filed by Arithmer株式会社 filed Critical Arithmer株式会社
Publication of WO2021111929A1 publication Critical patent/WO2021111929A1/fr

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00 Controls for manipulators
    • B25J13/08 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices

Definitions

  • the present invention relates to a picking system, a picking method, and a program for picking a work.
  • Patent Document 1 Japanese Unexamined Patent Publication No. 2011-167815
  • When the workpieces are flexible, they can take arbitrary shapes and can be oriented in arbitrary directions; in particular, when a plurality of workpieces overlap, it is expected to be difficult to calculate the position information and normal information of a target work needed to identify it and pick it stably.
  • According to a first aspect, a picking system for picking a work is provided, including an imaging unit that photographs the work to obtain an image and a depth map, an analysis unit that identifies the work based on the image and determines the position and orientation of the work based on the work portion of the depth map corresponding to the identified work, and a control unit that drives and controls a robot arm based on the position and orientation of the work.
  • According to a second aspect, a picking method for picking a work is provided, including a stage in which an imaging unit photographs the work to obtain an image and a depth map, a stage in which an analysis unit identifies the work based on the image and determines the position and orientation of the work based on the work portion of the depth map corresponding to the identified work, and a stage in which a control unit drives and controls a robot arm based on the position and orientation of the work.
  • According to a third aspect, a program for causing a computer to execute the picking method of the second aspect is provided.
  • the configuration of the robot according to this embodiment is shown.
  • the configuration of the robot hand is shown.
  • the functional configuration of the picking system according to this embodiment is shown.
  • the configuration of the work separating device is shown.
  • the arrangement of robots and containers in the picking system is shown. The work housed in the container compartment is shown.
  • the result of identifying the work in the container by image recognition is shown.
  • An example of the image and the depth map obtained by the picking system is shown.
  • An example of the work specified from the image and the work part in the depth map cut out by using this as a mask is shown.
  • An example of point cloud data obtained by reconstructing the work part in the depth map is shown.
  • An example of data points acquired to determine the orientation of the work from the point cloud data is shown.
  • An example of the orientation of the work determined using the three data points is shown.
  • An example of the adsorption point on the work surface determined from the point cloud data is shown.
  • An example of suctioning a workpiece by a robot hand is shown.
  • the flow of the picking method according to this embodiment is shown.
  • An example of the configuration of the computer according to this embodiment is shown.
  • The picking system 100 is a system that picks up a work 90 (also referred to as picking), transports the work 90, and places it at a target location (also referred to as placing), and includes a robot 101, a control device 150, and a work separating device 200.
  • the work 90 handled in the present embodiment is, for example, a flexible and amorphous bag-shaped container containing a liquid substance, a powdery substance, or a granular substance.
  • The work 90 cannot be stacked, in order to protect its contents, and is arranged upright in a container compartment. Since the work 90 is flexible, its surface can face any direction, and when the work 90 stands upright, its contents concentrate toward the lower side of the bag-shaped container. The contents are therefore protected by picking the work 90 on the upper side of the container, where the contents are not concentrated.
  • the robot 101 is a device for picking and placing a work 90, and includes a main body 110, a robot arm 120, and a robot hand 130.
  • the main body 110 is a portion that supports the robot arm 120 on a pedestal or a floor surface (not shown), and includes a base 111, a driving device 140, and a communication device (not shown).
  • the base 111 rotatably supports the main body 110.
  • the drive device 140 may employ, for example, an electric motor, which drives each part of the robot arm 120.
  • As the communication device, for example, a wireless communication device can be adopted; it receives control signals transmitted from the control device 150 and relays them to the drive device 140, the drive device in the fixing member 134, the suction pad 135, and the like.
  • the robot arm 120 is a part that drives the robot hand 130, and includes the first to third arms 121 to 123.
  • the first arm 121 is supported by the upper end of the main body 110 and rotates in the left-right direction in the drawing.
  • the second arm 122 is supported by the tip of the first arm 121 and rotates in the left-right direction in the drawing.
  • the third arm 123 supports the robot hand 130 at its tip and is rotatably supported by the tip of the second arm 122.
  • the robot hand 130 is a portion that grips the work 90, and has a base 131, a shaft 132, an image pickup device 133, a fixing member 134, and a suction pad 135.
  • the base 131 is a portion that supports each part of the robot hand 130, and is supported by the tip of the third arm 123.
  • the shaft 132 is a shaft-shaped member that supports the image pickup apparatus 133 and the suction pad 135, and the upper end thereof is supported by the base 131.
  • As the image pickup device 133, for example, a stereo camera can be adopted.
  • the image pickup apparatus 133 is fixed downward to the shaft 132 via the support member 133a, whereby the work 90 can be imaged and an image thereof and a depth map can be obtained.
  • the image is an RGB image as an example.
  • the depth map is a map relating to the distance in the depth direction with respect to the position of the image pickup apparatus 133.
  • the fixing member 134 is fixed to the lower end of the shaft 132 and supports the support member 135a in the horizontal direction.
  • the fixing member 134 has a driving device (not shown) such as an electric motor, whereby the supporting member 135a is rotated.
  • the suction pad 135 is connected to a compressor (not shown) and sucks and holds the work 90 by sucking air. This makes it possible to pick an amorphous work 90.
  • One suction pad 135 is fixed to each end of the support member 135a, and the support member 135a can be rotated by the driving device in the fixing member 134 to change its inclination in the vertical direction.
  • the number of suction pads 135 is not limited to two, and may be any number of one or three or more depending on the size and weight of the work 90.
  • The suction pad 135 includes an air pressure sensor and the like, and measures its internal air pressure while air is being sucked by the compressor.
  • When the work 90 is successfully sucked, a low internal air pressure is measured, and when suction of the work 90 fails, a high internal air pressure is measured; therefore, it is possible to detect, for each of the two suction pads 135, whether gripping of the work 90 has succeeded or failed. The result is transmitted to the control unit 154.
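  • As a rough illustration of this per-pad check (the pressure threshold and the sensor read-out below are assumptions for illustration, not values from the embodiment), the decision could look like the following sketch:

```python
# Hypothetical sketch: per-pad grip check from the measured internal air pressure.
# PRESSURE_THRESHOLD_KPA is an assumed value; the embodiment only states low vs. high pressure.
PRESSURE_THRESHOLD_KPA = 60.0  # below this, the pad is considered sealed on the work

def grip_succeeded(internal_pressures_kpa):
    """Return a success flag per suction pad (low pressure = good seal on the work)."""
    return [p < PRESSURE_THRESHOLD_KPA for p in internal_pressures_kpa]

# e.g. two pads: one sealed, one leaking
print(grip_succeeded([45.0, 95.0]))  # -> [True, False]
```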
  • the control device 150 is a device that controls the robot 101, and is mounted by an arbitrary computer device.
  • the control device 150 includes a processing device (CPU and / or GPU) and a communication device.
  • The processing device (not shown) causes the control device 150 to realize the control functions for the robot 101 by executing the control program.
  • the control program is stored in, for example, a ROM (not shown), read by a processing device, and expanded in a RAM to be activated.
  • the communication device (not shown) is, for example, a device that wirelessly communicates with the robot 101.
  • the robot 101 is controlled by transmitting a control signal to the robot 101 by the communication device.
  • By executing the control program, the control device 150 realizes the imaging unit 151, the analysis unit 152, the learning unit 153, and the control unit 154.
  • the imaging unit 151 photographs the work 90 using the imaging device 133. This makes it possible to obtain an image of the work 90 and a depth map (in the depth direction).
  • the analysis unit 152 identifies the work 90 based on the image, and determines the position and orientation of the work 90 based on the work portion corresponding to the specified work 90 in the depth map. The method of determining the position and orientation of the work 90 will be described later.
  • the analysis unit 152 may specify the work based on the image by machine learning by the learning unit 153.
  • the learning unit 153 generates a machine learning model adopted by the analysis unit 152 by using the correct image in which the work 90 has been specified.
  • As the machine learning model, for example, deep learning can be adopted.
  • As the learning images, for example, 100 images of the work 90 may be used.
  • As the correct images, images in which the work 90 has been specified may be prepared by applying polygon annotation to, for example, several of those learning images using arbitrary image processing software (for example, VGG Image Annotator). Specifically, using the image processing software, vertices are manually placed so as to surround the work shown in the image, and polygons are created based on those vertices.
  • The control unit 154 drives and controls the robot arm 120 based on the position and orientation of the work 90 determined by the analysis unit 152. By transmitting control signals to the robot 101, the control unit 154 changes the direction of the main body 110 via the driving device 140 of the robot 101, extends and retracts the robot arm 120, tilts the suction pad 135 up and down via the driving device in the fixing member 134, and turns the suction pad 135 on and off, thereby picking and placing the work 90.
  • The control unit 154 determines the driving speed of the robot arm 120 according to the number of suction pads 135, among the plurality of suction pads 135, that are sucked onto the work 90.
  • When that number is large, for example when both of the two suction pads 135 are sucked onto the work 90, the driving speed of the robot arm 120 is set high; when the number is small, for example when only one of the two suction pads 135 is sucked onto the work 90, the driving speed is set low.
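  • A minimal sketch of this rule (the concrete speed values are assumptions; the embodiment only distinguishes high and low speed):

```python
# Hypothetical sketch: choose the arm drive speed from the number of pads holding the work.
def select_arm_speed(pad_success_flags, high_speed=1.0, low_speed=0.4):
    """Return a drive-speed scale factor for the robot arm 120."""
    n_holding = sum(pad_success_flags)
    if n_holding == 0:
        return 0.0            # nothing gripped: do not transport
    if n_holding == len(pad_success_flags):
        return high_speed     # all pads holding: transport at high speed
    return low_speed          # partial grip: transport slowly but stably

print(select_arm_speed([True, True]))   # -> 1.0
print(select_arm_speed([True, False]))  # -> 0.4
```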
  • FIG. 2 shows the configuration of the work separating device 200.
  • the work separating device 200 is set with respect to the container 70.
  • the container 70 is divided into a total of 12 compartments by a partition 79, 3 in the longitudinal direction (diagonal direction in the drawing) and 4 in the lateral direction (horizontal direction in the drawing), and the work 90 is contained in the compartment Sa.
  • the work 91 is housed in the compartment Sb with the surface facing the lateral side.
  • the work separating device 200 is a device that brings at least the upper portion of the work 90 in one direction to separate it from another work, and includes a set of a support drive device 210 and a rod-shaped member 220.
  • The pair of support drive devices 210 are devices that drive the rod-shaped member 220 in the lateral direction of the container, and are arranged on one side and the other side of the container 70 in the longitudinal direction, respectively.
  • Each support drive device 210 includes a main body 211, a table 212, and a support 213.
  • The main body 211 supports the table 212 on its upper surface, and drives the table 212 in a uniaxial direction by using an electric motor (not shown) to rotate a ball screw passing through a guide block fixed to the bottom of the table 212.
  • the table 212 is a member that supports the support tool 213 on the main body 211 and moves in the uniaxial direction.
  • the support tool 213 is a member that stands on the table 212 and supports one end of the rod-shaped member 220.
  • the support 213 has a shape that extends upward, bends inward toward the inside of the container 70, and extends downward along the inner wall of the container 70, and one end of the rod-shaped member 220 is fixed to the tip thereof.
  • the tip of the support tool 213 supports the rod-shaped member 220 and moves up and down by a driving device (not shown).
  • Both ends of the rod-shaped member 220 are fixed to each support 213 of the set of support drive devices 210, and the rod-shaped member 220 is supported above the partition 79 in the container 70.
  • the rod-shaped member 220 may be formed of a material such as metal or plastic, or may be a wire instead.
  • The rod-shaped member 220 is raised by the support tools 213, the support tools 213 are driven by the main bodies 211 in the left-right direction in the drawing, and the rod-shaped member 220 is then lowered and placed between the works 90 and 91. By further driving the support tools 213 to the left side of the drawing via the main bodies 211, at least the upper part of the work 90 is moved to the left side of the drawing and separated from the other work 91.
  • As a result, a space can be provided between the works 90 and 91 so that the surface of the work 90 or the work 91 can be imaged, and a space for inserting the suction pad 135 of the robot hand 130 can be secured.
  • the work separating device 200 may not only move the rod-shaped member 220 to move at least the upper part of the work 90 to the left, but also move at least the upper part of the work 91 to the right. As a result, the upper part of the work 90 and the upper part of the work 91 are further separated from each other, so that the possibility that the work 91 is reflected when the work 90 is imaged can be reduced.
  • FIG. 3 shows the arrangement of the robot 101, the containers 70, 80, and the QR code (registered trademark) reader 60 in the picking system 100 according to the present embodiment.
  • the container 70 is arranged on the left side of the robot 101
  • the container 80 is arranged on the upper side
  • the QR code (registered trademark) reader 60 is arranged on the upper left side.
  • the container 70 is a container for accommodating the work to be rearranged, and the inside thereof is divided into 12 sections 71 as an example by a partition 79. It is assumed that the work is cluttered in 12 compartments 71.
  • the work separating device 200 is installed in the container 70.
  • The container 80 is a container for placing, and the inside thereof is divided into 12 compartments 81 as an example by a partition 89. Each work is housed in a designated compartment among the 12 compartments 81.
  • the QR code (registered trademark) reader 60 is a device that reads the QR code (registered trademark).
  • a QR code (registered trademark) that records the contents, the amount of contents, etc. is affixed to the surface of each work.
  • The QR code (registered trademark) of each work is read by the QR code (registered trademark) reader 60, and the compartment 81 of the container 80 in which the work is to be accommodated is determined from the recorded information.
  • FIG. 4A shows the workpieces 91 to 97 housed in the container 70.
  • the inner wall of the container 70 is omitted.
  • Three workpieces 91, 95, and 96 are housed facing the right side of the drawing, leaning against the left inner wall or a partition (not shown) of the container 70, with their upper portions overlapping each other.
  • The work 91 is on top, and the works 95 and 96 lie underneath it.
  • One work each, 92 and 93, is accommodated facing the lower side of the drawing, with its upper portion leaning toward the upper inner wall or a partition (not shown) of the container 70.
  • The two workpieces 94 and 97 are housed facing the left side of the drawing, leaning toward the right inner wall or a partition (not shown) of the container 70, with their upper portions overlapping each other.
  • The work 94 is on top, and the work 97 lies underneath it.
  • the accommodation state of these works 91 to 97 is imaged from above by the imaging unit 151, and as a result, an image and a depth map are obtained.
  • FIG. 4B shows the identification results of the works 91 to 97 in the container 70 by image recognition.
  • the analysis unit 152 identifies the individual works 91 to 97 by recognizing the images of the works 91 to 97. Specifically, the analysis unit 152 identifies the works 91 to 97 based on the image by using the machine learning model generated from the correct image in which the work has been specified. As a method for specifying, for example, semantic segmentation can be adopted, and in this example, instance segmentation in particular can be adopted. In instance segmentation, each pixel of an image is assigned to an object class, and the shape of each object is captured to identify individual works. In short, the analysis unit 152 identifies the works 91 to 97 by recognizing the pixels occupied by the works 91 to 97 in the image.
  • The work 91, of which almost the entire surface 91a can be recognized, is specified.
  • In the compartments 72 and 73, the workpieces 92 and 93, of which almost the entire surfaces 92a and 93a can be recognized, are specified.
  • Of the works 94 and 97 in the compartment 74, the work 94, of which almost the entire surface 94a can be recognized, is specified. By instance segmentation, a work whose almost entire surface can be recognized from above the container 70 can be specified as a work stacked on top.
  • On the other hand, the workpieces 95 to 97, of which only a part of the surface can be recognized, are specified as workpieces stacked underneath; since they do not expose upward a sufficiently wide surface area that can be sucked and held by the suction pad 135, they are excluded from the works to be picked.
  • the analysis unit 152 may identify the work by object contour detection (Object Contour Detection) instead of semantic segmentation.
  • the analysis unit may identify the work by a rule-based image recognition method instead of machine learning.
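  • As an illustrative sketch of the segmentation-based identification described above (the embodiment does not name a specific network or library; the following assumes a torchvision Mask R-CNN fine-tuned on the polygon-annotated correct images, and the weight file name and the 0.5 score and mask thresholds are likewise assumptions):

```python
import torch
import torchvision

# Hypothetical sketch of the instance segmentation used to identify individual works.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=2)  # background + "work"
model.load_state_dict(torch.load("work_maskrcnn.pt"))  # hypothetical fine-tuned weights
model.eval()

def identify_works(rgb_image_hwc):
    """Return one boolean pixel mask per detected work in the container image."""
    x = torch.from_numpy(rgb_image_hwc).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        out = model([x])[0]
    masks = []
    for mask, score in zip(out["masks"], out["scores"]):
        if score >= 0.5:                     # keep confident detections only
            masks.append(mask[0].numpy() >= 0.5)
    return masks                             # each mask marks the pixels occupied by one work
```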
  • FIG. 5A shows an example of an image (left figure) and a depth map (right figure) of the work in the container 70 by the imaging unit 151 in the picking system 100.
  • the image may be an RGB image.
  • one work 91 is housed in one section 71 partitioned by the partition 79 of the container 70, and one work is accommodated in each of the other surrounding sections.
  • the analysis unit 152 applies the above-mentioned image recognition to the image to identify the work 91 and other works in the container 70 from the partition 79 and the like.
  • the analysis unit 152 identifies the work 91 to be picked from the identified works.
  • FIG. 5B shows an example of the work 91 (left figure) identified from the image and the work portion 91D (right figure) in the depth map.
  • the analysis unit 152 extracts the specified work 91 from the image and uses this as a mask to cut out the corresponding work portion 91D from the depth map.
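  • A minimal sketch of this masking step (the use of NaN to mark pixels outside the work, and matching array shapes, are assumptions for illustration):

```python
import numpy as np

def cut_out_work_portion(depth_map, work_mask):
    """Keep depth values only where the identified work 91 occupies the image."""
    return np.where(work_mask, depth_map, np.nan)  # NaN marks pixels outside the work portion 91D
```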
  • FIG. 5C shows an example of the point cloud data 91P obtained by reconstructing the work portion 91D in the depth map.
  • The analysis unit 152 converts the depth map into a three-dimensional point cloud based on the position of the image pickup device 133; that is, it converts the position information in the depth direction, referenced to the image pickup device 133, into position information in three-dimensional space.
  • In this way, three-dimensional position information of the work in that space can be obtained from the depth map.
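  • A minimal back-projection sketch of this conversion (the embodiment does not give camera parameters; the pinhole intrinsics fx, fy, cx, cy are assumptions):

```python
import numpy as np

def depth_to_point_cloud(work_portion, fx, fy, cx, cy):
    """Back-project the masked depth map into camera-frame XYZ points (pinhole camera model)."""
    h, w = work_portion.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = work_portion
    valid = np.isfinite(z)                       # skip pixels outside the work portion
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=1)  # (N, 3) point cloud data 91P
```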
  • the analysis unit 152 may downsample the point cloud data 91P as appropriate.
  • the sampling rate may be, for example, 1/10.
  • The analysis unit 152 may perform statistical outlier removal on the point cloud data 91P. By removing outliers from the point cloud data 91P, the position and orientation of the work 91 can be determined accurately.
  • Alternatively, the downsampling and outlier removal may be applied to the depth map before reconstruction.
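  • A plain-NumPy sketch of both clean-up steps (the 1/10 rate follows the example above; the neighbour count k and the 2-sigma cut-off are assumptions, and the O(N^2) distance computation is adequate only after downsampling):

```python
import numpy as np

def downsample(points, rate=10):
    """Keep roughly one point in `rate` (a 1/10 sampling rate as in the example above)."""
    return points[::rate]

def remove_statistical_outliers(points, k=16, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours is unusually large."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)  # (N, N) distances
    knn = np.sort(dists, axis=1)[:, 1:k + 1]     # k nearest neighbours (index 0 is the point itself)
    mean_knn = knn.mean(axis=1)
    keep = mean_knn < mean_knn.mean() + std_ratio * mean_knn.std()
    return points[keep]
```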
  • FIG. 5D shows an example of data points P1 to P3 acquired from the point cloud data 91P to determine the orientation of the work 91.
  • the analysis unit 152 determines a rectangular area S including all or most of the point cloud data 91P reconstructed from the work portion 91D.
  • The analysis unit 152 divides the rectangular area S into nine areas S1 to S9, three vertically and three horizontally. Here, the rectangular area S may be divided at equal intervals in the vertical direction and at equal intervals in the left-right direction. However, at least the vertical and horizontal widths of the central area S5 are set larger than a predetermined minimum width.
  • the analysis unit 152 selects three areas that are not adjacent to each other from the nine areas S1 to S9.
  • Here, for example, the areas S1, S3, and S8, which form an inverted-triangular arrangement, are selected.
  • Alternatively, the areas S2, S7, and S9 in a triangular arrangement, the areas S3, S4, and S9 in a left-facing triangular arrangement, or the areas S1, S6, and S7 in a right-facing triangular arrangement may be selected.
  • the analysis unit 152 acquires one data point P1 to P3 from each of the three areas S1, S3, and S8.
  • the data points P1 to P3 may be randomly selected from the data points in each area, or the closest data point may be selected from the predetermined points for each area.
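  • A sketch of this selection (the mapping of the areas S1, S3, and S8 to grid cells assumes the areas are numbered row by row, and the grid is divided equally, which the embodiment allows but does not require):

```python
import numpy as np

def pick_three_data_points(points, cells=((0, 0), (2, 0), (1, 2))):
    """Pick one data point from each of three non-adjacent cells of a 3x3 grid.

    `cells` are (column, row) indices; the default corresponds to areas S1, S3 and S8
    under a row-wise numbering of the nine areas.
    """
    xy = points[:, :2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    idx = np.clip(((xy - mins) / (maxs - mins + 1e-9) * 3).astype(int), 0, 2)  # cell index per point
    chosen = []
    for col, row in cells:
        in_cell = points[(idx[:, 0] == col) & (idx[:, 1] == row)]
        if len(in_cell) == 0:
            return None  # no data point in this area: the embodiment then switches to other areas
        chosen.append(in_cell[np.random.randint(len(in_cell))])  # random pick within the area
    return np.stack(chosen)  # P1, P2, P3 as a (3, 3) array
```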
  • FIG. 5E shows an example of the orientation of the work 91 determined using the three data points P1 to P3.
  • the analysis unit 152 calculates the normal vector Vn of the triangular area T defined by the three data points P1 to P3 using the three selected data points P1 to P3.
  • the orientation of the surface of the work 91 is determined from the normal vector Vn.
  • To determine the orientation of the work 91, the three data points P1 to P3 used to calculate the normal vector Vn are acquired from three mutually non-adjacent areas among the nine areas S1 to S9 into which the work portion 91D is divided.
  • Because the data points P1 to P3 are spread apart in this way, the overall orientation of the surface of the work 91 can be determined without being affected by local unevenness, even for the flexible work 91.
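  • A short sketch of the normal computation (the sign convention, flipping the normal so that it faces the camera looking down along +z, is an assumption):

```python
import numpy as np

def surface_normal(p1, p2, p3):
    """Unit normal vector Vn of the triangular area T defined by P1, P2, P3."""
    vn = np.cross(p2 - p1, p3 - p1)
    vn = vn / np.linalg.norm(vn)
    if vn[2] > 0:          # assumed convention: camera looks along +z, so the visible face has z < 0
        vn = -vn
    return vn
```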
  • The analysis unit 152 may change the three areas when the data points P1 to P3 for calculating the normal vector cannot be acquired. In such a case, only the area for which a data point could not be obtained may be changed to another area, provided that the three areas remain mutually non-adjacent. Alternatively, the areas S1, S3, and S8 in the inverted-triangular arrangement may be changed to any of the areas S2, S7, and S9 in the triangular arrangement, the areas S3, S4, and S9 in the left-facing triangular arrangement, or the areas S1, S6, and S7 in the right-facing triangular arrangement. As a result, even if the depth map is partially missing for the irregular work 91 and a data point cannot be acquired, changing the areas makes it possible to obtain the data points and determine the overall orientation of the work surface.
  • the analysis unit 152 may divide the rectangular area S into 9 substantially evenly. By dividing the rectangular area S into 9 substantially evenly, a triangular area T having a size equal to or larger than a predetermined value can be obtained, and a highly robust normal vector can be calculated.
  • the analysis unit 152 divides the rectangular area S into a total of nine areas S1 to S9, but the number of divisions of the rectangular area S is not limited to nine.
  • the analysis unit 152 may divide the work portion 91D into seven or more sections.
  • FIG. 5F shows an example of the suction point (that is, the position determination point) Pa on the work surface determined from the point cloud data 91P.
  • Based on the orientation of the work 91, the analysis unit 152 determines a direction along the surface of the work 91 (the direction, among unit-length vectors lying in the plane of the triangular area T, whose vertically downward component is substantially maximum). Then, based on that direction, the analysis unit 152 determines the suction point Pa in the work portion 91D (this position is referred to as the position of the work 91).
  • Specifically, the tangent vector Vt is determined from the normal vector Vn so that the suction point Pa lies on the upper side of the work 91, where the contents are not concentrated, taking the size of the suction pad 135 into account, and the position located the distance d downward along the tangent vector Vt from the center of the upper end of the rectangular area S is defined as the suction point Pa.
  • the distance d is set to be larger than the size of the suction pad 135, for example, 3 cm.
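  • A sketch of this step (the camera-frame "down" axis and rect_top_center, the centre of the upper edge of the rectangular area S expressed in the same frame as the point cloud, are assumptions):

```python
import numpy as np

def suction_point(vn, rect_top_center, d=0.03, down=np.array([0.0, 0.0, 1.0])):
    """Suction point Pa: offset d along the in-plane direction with maximal downward component."""
    vt = down - np.dot(down, vn) * vn   # project the downward direction onto the work surface plane
    vt = vt / np.linalg.norm(vt)        # tangent vector Vt (unit length)
    return rect_top_center + d * vt     # Pa, with d larger than the suction pad, e.g. 3 cm
```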
  • FIG. 6 shows an example of suction of the work 91 by the robot hand 130 (suction pad 135).
  • The control unit 154 receives the calculated position and orientation of the work 91 from the analysis unit 152, controls the robot arm 120 so as to align the orientation of the suction pad 135 with the orientation of the work 91 (that is, the direction of the normal vector Vn), and drives the suction pad 135 toward the suction point Pa on the surface of the work 91.
  • By matching the orientation of the suction pad 135 with the orientation of the work 91, the work surface can be sucked and the work can be picked stably.
  • FIG. 7 shows the flow of the picking method according to the present embodiment.
  • In step S102, the control unit 154 moves the robot hand 130 above the container 70. Alternatively, it may move above one of the plurality of compartments 71 in the container 70.
  • In step S104, the imaging unit 151 photographs the work in the container 70 to obtain an image and a depth map.
  • In step S106, the control unit 154 analyzes the image and determines whether or not the work 90 can be seen, that is, whether or not the work 90 is included in the image. If the work 90 is visible, the process proceeds to the next step. If it is not visible, the flow ends. Alternatively, returning to step S102, the control unit 154 may move the robot hand 130 above the next compartment 71 of the plurality of compartments 71 in the container 70.
  • Even when the work 90 can be recognized, for example when it is determined that a sufficiently wide surface area that can be sucked and held by the suction pad 135 is not exposed upward because the work 90 overlaps with another work 91, the control unit 154 may separate the work 90 from the other work 91 by moving at least the upper portion of the work 90 in one direction with the work separating device 200. As a result, a space can be provided between the work 91 and the surface of the work 90, and a space for inserting the suction pad 135 of the robot hand 130 can be secured. After separating the works, the process returns to step S104.
  • In step S108, the analysis unit 152 identifies the work 90 based on the image, and calculates the position and orientation of the work 90 based on the work portion 91D of the depth map corresponding to the identified work 90. The details are as described above.
  • the analysis unit 152 identifies the individual works 91 to 97 by performing image recognition (particularly, boundary recognition) on the images of the works 91 to 97.
  • the analysis unit 152 identifies the work 91 to be picked from the works identified in the image of FIG. 5A (left figure).
  • the analysis unit 152 extracts the specified work 91 from the image of FIG. 5B (left figure), uses this as a mask, and cuts out the corresponding work portion 91D from the depth map (right figure).
  • the analysis unit 152 converts the depth map (work portion 91D) into a three-dimensional point cloud based on the position of the image pickup apparatus 133.
  • The analysis unit 152 may downsample the point cloud data 91P as appropriate. Further, the analysis unit 152 may perform statistical outlier removal on the point cloud data 91P. Alternatively, the downsampling and outlier removal may be applied to the depth map before reconstruction.
  • The analysis unit 152 determines a rectangular area S including all or most of the point cloud data 91P reconstructed from the work portion 91D, divides the rectangular area S into nine areas S1 to S9, three vertically and three horizontally, and acquires one data point P1 to P3 from each of three mutually non-adjacent areas, for example the three areas S1, S3, and S8.
  • the analysis unit 152 calculates the normal vector Vn of the triangular area T defined by the three data points P1 to P3 using the three selected data points P1 to P3. The orientation of the surface of the work 91 is determined from the normal vector Vn.
  • the analysis unit 152 may change the three areas when the data points P1 to P3 for calculating the normal vector Vn cannot be acquired.
  • Based on the orientation of the work 91, the analysis unit 152 determines the direction along the surface of the work 91 (the direction, among unit-length vectors lying in the plane of the triangular area T, whose vertically downward component is substantially maximum), and determines the suction point Pa in the work portion 91D (this position is referred to as the position of the work 91) based on that direction.
  • In step S110, the control unit 154 moves the robot hand 130 above the target work 91 based on the position of the work 91 calculated in step S108.
  • In step S112, the control unit 154 turns on the suction pad 135.
  • In step S114, the control unit 154 controls the robot arm 120 based on the position and orientation of the work 91 calculated in step S108, aligns the orientation of the suction pad 135 with the orientation of the work 91 (that is, the direction of the normal vector Vn), and drives the suction pad 135 toward the suction point Pa on the surface of the work 91. By aligning the orientation of the suction pad 135 with the orientation of the work 91, the work surface is sucked and the work is picked stably.
  • In step S116, the control unit 154 determines whether or not the work 91 has been successfully sucked by the suction pad 135. If successful, the process proceeds to step S118. If it fails, the process returns to step S104. If it fails a plurality of times, the flow may be terminated.
  • In step S118, the control unit 154 determines the number of suction pads 135, among the plurality of suction pads 135, that are sucked onto the work 91.
  • If both of the two suction pads 135 are sucked onto the work 91, the process proceeds to step S120, and the control unit 154 increases the driving speed of the robot arm 120 to convey the work 91 at high speed.
  • If only one of the two suction pads 135 is sucked onto the work 91, the process proceeds to step S122, the driving speed of the robot arm 120 is lowered, and the work 91 is conveyed stably. Since the work 91 is flexible, it can still be conveyed stably even when one of the suction pads 135 has failed to suck it.
  • In step S124, the control unit 154 conveys the work 91 to the QR code (registered trademark) reader 60, which reads the QR code (registered trademark) attached to the surface of the work.
  • The control unit 154 determines, from the information recorded in the QR code (registered trademark), the compartment 81 of the container 80 in which the work is to be accommodated.
  • In step S126, the control unit 154 conveys the work 91 to the target compartment 81.
  • In step S128, the control unit 154 turns off the suction pad 135 and places the work 91 in the compartment 81.
  • the control unit 154 returns to step S102 and repeats the above steps.
  • In this way, the control unit 154 transports all the works in the container 70 to the container 80, and ends the flow when no work can be confirmed in the container 70 in step S106.
  • Prior to the flow of the picking method described above, the learning unit 153 generates a machine learning model using correct images in which the work has been specified. However, the analysis unit 152 may instead specify the work based on the image by a rule-based image recognition method, without machine learning. A compact sketch of the overall flow is given below.
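```python
# Hypothetical skeleton of the picking flow of FIG. 7 (steps S102 to S128).
# imaging_unit, analysis_unit, control_unit and qr_reader are placeholder interfaces
# introduced only for illustration, not APIs defined by the embodiment;
# select_arm_speed is the speed rule sketched earlier.
def picking_loop(imaging_unit, analysis_unit, control_unit, qr_reader, max_failures=3):
    failures = 0
    while True:
        control_unit.move_hand_above_container()                      # S102
        image, depth_map = imaging_unit.capture()                     # S104
        if not analysis_unit.work_visible(image):                     # S106
            break                                                     # no work left: end the flow
        position, orientation = analysis_unit.locate_work(image, depth_map)  # S108
        control_unit.move_hand_above(position)                        # S110
        control_unit.suction_on()                                     # S112
        control_unit.approach(position, orientation)                  # S114
        pads = control_unit.pads_holding()                            # S116: per-pad success flags
        if not any(pads):
            failures += 1
            if failures >= max_failures:
                break                                                 # repeated failure: end the flow
            continue                                                  # otherwise retry from S104
        speed = select_arm_speed(pads)                                # S118-S122: choose drive speed
        control_unit.move_to_reader(speed)                            # S124: convey the work to the reader
        compartment = qr_reader.read_destination()                    #        and read its QR code
        control_unit.place_in(compartment, speed)                     # S126, S128: convey, place, release
```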
  • As described above, the picking system 100 includes the imaging unit 151 that photographs the work to obtain an image and a depth map, the analysis unit 152 that identifies the work based on the image and determines the position and orientation of the work based on the work portion of the depth map corresponding to the identified work, and the control unit 154 that drives and controls the robot arm 120 based on the position and orientation of the work.
  • The analysis unit 152 identifies the work by image recognition and determines the position and orientation of the work based on the portion of the depth map corresponding to the identified work, and the control unit 154 controls the robot arm 120 so that it is driven to that position in accordance with the orientation of the work; therefore, even an irregularly shaped workpiece can be picked stably by the robot arm 120.
  • In the present embodiment, the suction pad 135 is used in the robot hand 130 to hold the work, but the present invention is not limited to this; a hand configured to grip the upper part of the work by pinching it may be used instead.
  • control device 150 is configured to include the learning unit 153, but the configuration of the control device 150 is not limited to this.
  • the control device 150 may be a device that stores a machine learning model that has been learned in advance, as long as the analysis unit 152 can identify the work. In this case, the control device 150 does not include the learning unit 153.
  • the learning unit 153 may exist as another device and may be connected via a network.
  • Various embodiments of the present invention may be described with reference to flowcharts and block diagrams, where a block may represent (1) a stage of a process in which an operation is performed or (2) a section of a device responsible for performing the operation. Specific stages and sections may be implemented by dedicated circuits, by programmable circuits supplied with computer-readable instructions stored on a computer-readable medium, and/or by processors supplied with computer-readable instructions stored on a computer-readable medium.
  • Dedicated circuits may include digital and / or analog hardware circuits, and may include integrated circuits (ICs) and / or discrete circuits.
  • Programmable circuits may include reconfigurable hardware circuits, including circuits for logical operations such as logical AND, logical OR, logical XOR, logical NAND, and logical NOR, memory elements such as flip-flops and registers, field-programmable gate arrays (FPGA), programmable logic arrays (PLA), and the like.
  • The computer-readable medium may include any tangible device capable of storing instructions to be executed by a suitable device, so that the computer-readable medium having the instructions stored therein constitutes a product including instructions that can be executed to create means for performing the operations specified in the flowcharts or block diagrams. Examples of computer-readable media may include electronic storage media, magnetic storage media, optical storage media, electromagnetic storage media, semiconductor storage media, and the like.
  • More specific examples of computer-readable media may include floppy (registered trademark) disks, diskettes, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), electrically erasable programmable read-only memory (EEPROM), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile discs (DVD), Blu-ray (RTM) discs, memory sticks, integrated circuit cards, and the like.
  • Computer-readable instructions may include assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk, JAVA (registered trademark), and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages.
  • Computer-readable instructions may be provided to a processor or programmable circuit of a general-purpose computer, a special-purpose computer, or another programmable data processing device, either locally or via a local area network (LAN) or a wide area network (WAN) such as the Internet, and may be executed to create means for performing the operations specified in the flowcharts or block diagrams.
  • processors include computer processors, processing units, microprocessors, digital signal processors, controllers, microcontrollers and the like.
  • FIG. 8 shows an example of a computer 2200 in which a plurality of aspects of the present invention may be embodied in whole or in part.
  • A program installed on the computer 2200 can cause the computer 2200 to function as operations associated with a device according to an embodiment of the present invention or as one or more sections of that device, can cause the computer 2200 to execute those operations or those one or more sections, and/or can cause the computer 2200 to execute a process according to an embodiment of the present invention or a stage of such a process.
  • Such a program may be run by the CPU 2212 to cause the computer 2200 to perform certain operations associated with some or all of the blocks in the flowcharts and block diagrams described herein.
  • the computer 2200 includes a CPU 2212, a RAM 2214, a graphic controller 2216, and a display device 2218, which are connected to each other by a host controller 2210.
  • the computer 2200 also includes input / output units such as a communication interface 2222, a hard disk drive 2224, a DVD-ROM drive 2226, and an IC card drive, which are connected to the host controller 2210 via the input / output controller 2220.
  • the computer also includes legacy input / output units such as the ROM 2230 and keyboard 2242, which are connected to the input / output controller 2220 via an input / output chip 2240.
  • the CPU 2212 operates according to the programs stored in the ROM 2230 and the RAM 2214, thereby controlling each unit.
  • the graphic controller 2216 acquires the image data generated by the CPU 2212 in a frame buffer or the like provided in the RAM 2214 or itself so that the image data is displayed on the display device 2218.
  • the communication interface 2222 communicates with other electronic devices via the network.
  • the hard disk drive 2224 stores programs and data used by the CPU 2212 in the computer 2200.
  • the DVD-ROM drive 2226 reads the program or data from the DVD-ROM 2201 and provides the program or data to the hard disk drive 2224 via the RAM 2214.
  • the IC card drive reads programs and data from the IC card and / or writes programs and data to the IC card.
  • the ROM 2230 stores a boot program or the like executed by the computer 2200 at the time of activation and / or a program depending on the hardware of the computer 2200.
  • the input / output chip 2240 may also connect various input / output units to the input / output controller 2220 via a parallel port, serial port, keyboard port, mouse port, and the like.
  • the program is provided by a computer-readable medium such as a DVD-ROM 2201 or an IC card.
  • the program is read from a computer-readable medium, installed on a hard disk drive 2224, RAM 2214, or ROM 2230, which is also an example of a computer-readable medium, and executed by the CPU 2212.
  • the information processing described in these programs is read by the computer 2200 and provides a link between the program and the various types of hardware resources described above.
  • the device or method may be configured by implementing manipulation or processing of information in accordance with the use of computer 2200.
  • For example, the CPU 2212 may execute a communication program loaded in the RAM 2214 and, based on the processing described in the communication program, instruct the communication interface 2222 to perform communication processing.
  • The communication interface 2222 reads transmission data stored in a transmission buffer processing area provided in a recording medium such as the RAM 2214, the hard disk drive 2224, the DVD-ROM 2201, or an IC card, transmits the read data to the network, and writes reception data received from the network into a reception buffer processing area or the like provided on the recording medium.
  • The CPU 2212 may cause the RAM 2214 to read all or a necessary part of a file or database stored in an external recording medium such as the hard disk drive 2224, the DVD-ROM drive 2226 (DVD-ROM 2201), or an IC card, and may perform various types of processing on the data in the RAM 2214. The CPU 2212 then writes the processed data back to the external recording medium.
  • The CPU 2212 may perform, on the data read from the RAM 2214, various types of processing described throughout the present disclosure, including various types of operations, information processing, conditional judgment, conditional branching, unconditional branching, and information retrieval/replacement, and may write the results back to the RAM 2214. Further, the CPU 2212 may search for information in a file, a database, or the like in the recording medium. For example, when a plurality of entries, each having an attribute value of a first attribute associated with an attribute value of a second attribute, are stored in the recording medium, the CPU 2212 may search the plurality of entries for an entry whose attribute value of the first attribute matches a specified condition, read the attribute value of the second attribute stored in that entry, and thereby acquire the attribute value of the second attribute associated with the first attribute satisfying the predetermined condition.
  • the program or software module described above may be stored on a computer 2200 or on a computer-readable medium near the computer 2200.
  • A recording medium such as a hard disk or RAM provided in a server system connected to a dedicated communication network or the Internet can be used as the computer-readable medium, whereby the program is provided to the computer 2200 over the network.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a picking system that stably picks and places a flexible work of indefinite shape. A picking system (100) comprises: an imaging unit (151) that photographs a work to obtain an image and a depth map; an analysis unit (152) that identifies the work based on the image and determines a position and an orientation of the work based on a work portion of the depth map corresponding to the identified work; and a control unit (154) that drives and controls a robot arm (120) based on the position and orientation of the work. The work is identified by the analysis unit (152) by image recognition. The position and orientation of the work are determined based on the portion of the depth map corresponding to the identified work. The robot arm (120) is controlled by the control unit (154) and driven to that position in accordance with the orientation of the work. This makes it possible for the robot arm (120) to stably pick even a work of indefinite shape.
PCT/JP2020/043700 2019-12-02 2020-11-24 Système de saisie, procédé de saisie et programme WO2021111929A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-218349 2019-12-02
JP2019218349A JP6924448B2 (ja) 2019-12-02 2019-12-02 ピッキングシステム、ピッキング方法、及びプログラム

Publications (1)

Publication Number Publication Date
WO2021111929A1 true WO2021111929A1 (fr) 2021-06-10

Family

ID=76218927

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/043700 WO2021111929A1 (fr) 2019-12-02 2020-11-24 Système de saisie, procédé de saisie et programme

Country Status (2)

Country Link
JP (2) JP6924448B2 (fr)
WO (1) WO2021111929A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230286165A1 (en) * 2022-03-08 2023-09-14 Mujin, Inc. Systems and methods for robotic system with object handling
WO2024004746A1 (fr) * 2022-07-01 2024-01-04 ソニーセミコンダクタソリューションズ株式会社 Système et programme

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010247959A (ja) * 2009-04-16 2010-11-04 Ihi Corp 箱状ワーク認識装置および方法
JP2011167815A (ja) * 2010-02-19 2011-09-01 Ihi Corp 物体認識ロボットシステム
JP2017030135A (ja) * 2015-07-31 2017-02-09 ファナック株式会社 ワークの取り出し動作を学習する機械学習装置、ロボットシステムおよび機械学習方法
JP2017042859A (ja) * 2015-08-25 2017-03-02 キヤノン株式会社 ピッキングシステム、並びに、そのための処理装置、方法及びプログラム
JP2018134698A (ja) * 2017-02-21 2018-08-30 ファナック株式会社 ワーク取出システム
JP2019181573A (ja) * 2018-04-02 2019-10-24 Kyoto Robotics株式会社 ピッキング装置及びその方法

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4962123B2 (ja) * 2007-04-27 2012-06-27 日産自動車株式会社 把持候補位置選出装置、把持候補位置選出方法、把持経路生成装置、および把持経路生成方法


Also Published As

Publication number Publication date
JP2021088011A (ja) 2021-06-10
JP6924448B2 (ja) 2021-08-25
JP2021151699A (ja) 2021-09-30


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20896854

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20896854

Country of ref document: EP

Kind code of ref document: A1