WO2020026447A1 - Parameter learning method and work system - Google Patents

Parameter learning method and work system

Info

Publication number
WO2020026447A1
Authority
WO
WIPO (PCT)
Prior art keywords
work
image
parameter
imaging
evaluation
Prior art date
Application number
PCT/JP2018/029287
Other languages
French (fr)
Japanese (ja)
Inventor
内田 剛
博史 大池
弘健 江嵜
アヌスヤ ナラサンビ
Original Assignee
株式会社Fuji
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社Fuji filed Critical 株式会社Fuji
Priority to PCT/JP2018/029287 priority Critical patent/WO2020026447A1/en
Priority to CN201880096177.2A priority patent/CN112512942B/en
Priority to JP2020534026A priority patent/JP7121127B2/en
Publication of WO2020026447A1 publication Critical patent/WO2020026447A1/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B65CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65GTRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
    • B65G47/00Article or material-handling devices associated with conveyors; Methods employing such devices
    • B65G47/02Devices for feeding articles or materials to conveyors
    • B65G47/04Devices for feeding articles or materials to conveyors for feeding articles
    • B65G47/12Devices for feeding articles or materials to conveyors for feeding articles from disorderly-arranged article piles or from loose assemblages of articles
    • B65G47/14Devices for feeding articles or materials to conveyors for feeding articles from disorderly-arranged article piles or from loose assemblages of articles arranging or orientating the articles by mechanical or pneumatic means during feeding

Definitions

  • This specification discloses a parameter learning method and a work system.
  • The main object of the present disclosure is to learn parameters appropriately without lowering the efficiency of the work of unraveling a bulk state of workpieces.
  • This disclosure employs the following means to achieve the above-mentioned main object.
  • A parameter learning method of the present disclosure is a method of learning a parameter for controlling an actuator that unravels a bulk state of a plurality of workpieces by a predetermined operation. The method includes an in-operation imaging step of imaging the plurality of workpieces during execution of the predetermined operation, an evaluation step of processing the captured image to evaluate a separation state of the plurality of workpieces, and a learning step of learning a relationship between the evaluation result of the evaluation step and the parameter in the predetermined operation being executed.
  • The parameter learning method of the present disclosure images a plurality of workpieces during execution of a predetermined operation that unravels their bulk state, processes the captured images to evaluate the separation state of the workpieces, and learns the relationship between the evaluation result and the parameter of the predetermined operation being executed. Because the parameter is learned from images captured during the operation itself, the actuator does not need to interrupt the operation for learning, and the parameter can be learned appropriately without lowering the efficiency of the work of unraveling the bulk state of the workpieces.
  • FIG. 1 is a configuration diagram illustrating a schematic configuration of a work system.
  • FIG. 2 is a configuration diagram illustrating an outline of a configuration of a work transfer device.
  • FIG. 3 is a partial external view of the work transfer device 20 as viewed from the back side.
  • FIG. 4 is an explanatory diagram showing the electrical connection relationships of the control device 70.
  • FIG. 5 is an explanatory diagram showing how the bulk state of the works W is unraveled.
  • FIG. 6 is a block diagram showing functions of the control device 70.
  • FIG. 7 is a flowchart illustrating an example of a loosening operation control routine.
  • FIG. 8 is a flowchart illustrating an example of the process at the start of a loosening operation.
  • FIG. 9 is an explanatory diagram showing an example of a method of calculating the differences ΔA and ΔG.
  • FIG. 10 is an explanatory diagram showing an example of the region area A and the center of gravity G.
  • FIG. 1 is a configuration diagram schematically showing the configuration of the work system 10
  • FIG. 2 is a configuration diagram schematically showing the configuration of the work transfer device 20
  • FIG. 3 is a partial external view of the work transfer device 20 as viewed from the back.
  • FIG. 4 is an explanatory diagram showing the electrical connection relationships of the control device 70. In FIGS. 1 and 2, the left-right direction is the X-axis direction, the front-rear direction is the Y-axis direction, and the up-down direction is the Z-axis direction.
  • the work system 10 is a system in which the work W stored in the supply box 12 is transferred to the mounting table T and aligned. As shown in FIG. 1, the work system 10 includes a mounting table transfer device 16, a work transfer device 20, a supply robot 40, and a sampling robot 50. These are installed on the workbench 11.
  • the mounting table transport device 16 has a pair of belt conveyors that are stretched in the left-right direction (X-axis direction) with an interval in the front-rear direction (Y-axis direction).
  • the mounting table T is transported from left to right by a belt conveyor.
  • The supply robot 40 is a robot for taking out workpieces W, which are various parts such as mechanical parts and electrical parts, from the supply boxes 12 and supplying them to the supply area A1 of the work transfer device 20 (see FIG. 2).
  • the supply robot 40 includes a vertical articulated robot arm 41 and an end effector 42.
  • The robot arm 41 includes a plurality of links, a plurality of joints that rotatably or pivotally connect the links, drive motors 44 that drive the joints (see FIG. 4), and encoders 45 that detect the angles of the joints (see FIG. 4).
  • the plurality of links include a distal link to which the end effector 42 is attached, and a proximal link fixed to the worktable 11.
  • the end effector 42 can hold and release the work W.
  • the end effector 42 can use, for example, a mechanical chuck, a suction nozzle, an electromagnet, or the like, and supplies the work W to the supply area A1 in a bulk state.
  • the collection robot 50 is a robot for collecting the work W in the collection area A2 (see FIG. 2) of the work transfer device 20, transferring the work W to the mounting table T, and aligning the work W.
  • the sampling robot 50 includes a vertical articulated robot arm 51 and an end effector 52.
  • The robot arm 51 includes a plurality of links, a plurality of joints that rotatably or pivotally connect the links, drive motors 54 that drive the joints (see FIG. 4), and encoders 55 that detect the angles of the joints (see FIG. 4).
  • the plurality of links include a distal link to which the end effector 52 is attached, and a proximal link fixed to the worktable 11. The end effector 52 can hold and release the work W.
  • A camera 53 is also attached to the distal link of the robot arm 51; it images the works W conveyed by the work transfer device 20 and the mounting tables T conveyed by the mounting table transfer device 16 in order to grasp their positions and states.
  • the work transfer device 20 has a plurality of transfer lanes 21 that can transfer the work W from the supply area A1 to the collection area A2 in the front-rear direction (Y-axis direction).
  • a plurality of supply boxes 12 for accommodating works W to be supplied to each of the plurality of transfer lanes 21 are arranged behind the work transfer device 20.
  • The work transfer device 20 includes a conveyor belt 22 and partitions 25. As shown in FIG. 2, the conveyor belt 22 is stretched over a drive roller 23a and a driven roller 23b. The works W are placed on the upper surface portion 22a (placement portion) of the conveyor belt 22, and the drive roller 23a is rotationally driven by the drive motor 38 (see FIG. 4) to convey the works W in the belt feeding direction. Side walls 24a and 24b are provided on both sides of the conveyor belt 22, and the drive roller 23a and the driven roller 23b are rotatably supported by them. As shown in FIG. 3, the work transfer device 20 has a support plate 28 on the back side of the upper surface portion 22a of the conveyor belt 22.
  • the support plate 28 prevents the conveyor belt 22 from bending due to the weight of the work W placed on the upper surface 22a.
  • openings 28a are formed at positions corresponding to the collection areas A2 of the plurality of transport lanes 21, respectively.
  • a vertical movement device 30 for pushing up the upper surface portion 22a from the back surface and moving the upper surface portion 22a up and down is arranged below each opening 28a.
  • the vertical movement device 30 includes a contact body 31 and a cylinder 32 for vertically moving the contact body 31 so as to penetrate the opening 28a.
  • the cylinder 32 is supported by a support base 29 fixed to the side walls 24a, 24b.
  • the partition 25 is a partition plate that partitions one conveyor belt 22 (upper surface portion 22a) into a plurality of transport lanes 21.
  • the partitions 25 extend in parallel to the side walls 24a and 24b arranged on both sides of the conveyor belt 22, and are arranged at equal intervals so that each of the transport lanes 21 has the same lane width.
  • control device 70 is configured as a known computer including a CPU, a ROM, a HDD, a RAM, an input / output interface, a communication interface, and the like.
  • Various signals from the encoder 45 of the supply robot 40, the encoder 55 of the collection robot 50, the camera 53, the input device 80, and the like are input to the control device 70.
  • Various control signals are output from the control device 70 to the drive motor 38 of the work transfer device 20, the vertical movement devices 30 (cylinders 32), the drive motors 44 of the supply robot 40, the drive motors 54 of the collection robot 50, the camera 53, the mounting table transfer device 16, and the like.
  • the control device 70 learns parameters for controlling the cylinder 32 of the vertically moving device 30 and can control the cylinder 32 by determining an appropriate parameter based on the learning result.
  • FIG. 5 is an explanatory diagram showing a state where the bulk of the works W is unraveled.
  • A loosening operation, in which the cylinder 32 moves the contact body 31 up and down (vibrates it), breaks up the lump of works W in a bulk state into a separated state so that the works W can be easily collected by the collection robot 50.
  • the control device 70 controls the cylinder 32 with parameters suitable for unraveling the work W according to the specifications of the work W, such as the weight, size, shape, and material of the work W.
  • examples of the parameters include an impact force and a vibration frequency when the conveyor belt 22 is pushed up by the vertical movement (vibration) of the contact body 31.
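As an illustration only (the publication does not define a data structure), the loosening-operation parameters named above could be grouped as follows; the field names are assumptions, not taken from the publication:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LooseningParams:
    """Hypothetical container for the loosening-operation parameters:
    the impact force when the contact body 31 pushes up the belt, and
    the frequency of the cylinder 32's vertical vibration."""
    impact_force: float   # push-up impact force (units illustrative)
    vibration_hz: float   # vertical-vibration frequency
```

Because the dataclass is frozen and hashable, parameter sets can also serve directly as lookup keys in a learning table.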
  • FIG. 6 is a block diagram showing functions of the control device 70.
  • the control device 70 includes a parameter learning unit 70A that mainly learns parameters, and a drive control unit 70B that mainly determines appropriate parameters and drives and controls the vertical movement device 30.
  • the parameter learning unit 70A includes a learning model 71, an imaging processing unit 72, an evaluation processing unit 73, and a learning processing unit 74.
  • the imaging processing unit 72 causes the camera 53 to image the work W in a bulk state or the separated state in which the work W is unraveled, and inputs the taken image.
  • the evaluation processing unit 73 processes the captured image, calculates a predetermined evaluation value regarding the separation state of the work W, and evaluates the separation state.
  • The learning processing unit 74 learns, by well-known machine learning, the relationship between the parameters of the unraveling operation being executed and the evaluation results of the evaluation processing unit 73, and builds a learning model 71 that also captures correlations with the specifications of the works W and the like. Examples of learning methods include reinforcement learning and genetic algorithms; other methods may also be used.
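The publication names reinforcement learning and genetic algorithms as candidate learning methods without fixing one. As a minimal, hypothetical sketch of how a learning model 71 might relate parameters to evaluation results, an epsilon-greedy bandit over discrete parameter candidates could look like this (all names are illustrative):

```python
import random

class ParameterLearner:
    """Bandit-style sketch: keep a running mean evaluation per
    candidate parameter set, explore occasionally, exploit otherwise."""

    def __init__(self, candidates, epsilon=0.1):
        self.epsilon = epsilon
        # per candidate: [running mean evaluation, sample count]
        self.stats = {p: [0.0, 0] for p in candidates}

    def select(self):
        # with probability epsilon try a random candidate (exploration),
        # otherwise pick the best-evaluated one so far (exploitation)
        if random.random() < self.epsilon:
            return random.choice(list(self.stats))
        return max(self.stats, key=lambda p: self.stats[p][0])

    def update(self, params, evaluation):
        # incremental mean update for the evaluated parameter set
        mean, n = self.stats[params]
        n += 1
        self.stats[params] = [mean + (evaluation - mean) / n, n]
```

This stands in for the S140 learning step; a genetic algorithm would instead mutate and recombine high-scoring parameter sets.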
  • the drive control unit 70B includes a parameter determination unit 75 and a drive unit 76.
  • the parameter determining unit 75 determines a parameter corresponding to the specification of the work W or the like using the learning model 71, or appropriately determines an arbitrary parameter.
  • the drive unit 76 controls the cylinder 32 of the vertical movement device 30 based on the parameters determined by the parameter determination unit 75.
  • each control such as the supply control and the transfer control of the work W, the loosening operation control, and the sampling and placement control is sequentially performed.
  • the supply control is performed by collecting the work W from the supply box 12 in accordance with the supply order and controlling the drive of the supply robot 40 so as to supply the work W to the supply area A1 of the corresponding transfer lane 21.
  • the supply order is, for example, an order specified by an operator by operating the input device 80.
  • the transfer control is performed by controlling the drive of the work transfer device 20 so that the work W supplied to the supply area A1 reaches the collection area A2.
  • the loosening operation control is performed by controlling the driving of the cylinder 32 of the vertical movement device 30 corresponding to the collection area A2 in a state where the work W has reached the collection area A2.
  • Sampling and placement control is performed by driving and controlling the sampling robot 50 so that the works W, unraveled and separated by the loosening operation control, are collected, aligned, and mounted on the mounting table T.
  • Specifically, the camera 53 images the works W in the collection area A2, the captured image is processed to select a work W, and the collection robot 50 is driven and controlled to collect the selected work W.
  • These controls performed on the work W in each transfer lane 21 may be performed in parallel if they do not affect the control on the work W in the other transfer lanes 21.
  • the details of the loosening operation control will be described based on the loosening operation control routine shown in FIG.
  • the control device 70 determines whether or not it is time to start the loosening operation (S100).
  • the control device 70 determines that it is the start timing when the work W reaches the sampling area A2 by the transport control and the vertical movement device 30 is in a state where it can be driven. Note that the control device 70 may determine that the start timing is reached, for example, when it is necessary to perform another loosening operation on the remaining work W from which some work W has been collected by performing the loosening operation.
  • the control device 70 executes the process for starting the loosening operation shown in FIG. 8 (S105).
  • In this process, the control device 70 first initializes the number n, indicating the imaging order, to 1 (S200), and captures image 1 (the image with number 1) with the camera 53 before the start of the loosening operation (S205). Next, the control device 70 processes image 1 to detect the outer edge of the lump region of the works W, and calculates the region area A(1) and its center of gravity G(1) (S210). The works W are supplied in a bulk state to the supply area A1 and transported to the collection area A2.
  • In image 1, a plurality of works W are intertwined to form a lump, and the brightness values and the like differ between the region of the works W and the upper surface portion 22a of the conveyor belt 22 serving as the background.
  • The control device 70 converts image 1 into a grayscale image, detects from it the boundary between the lump of works W and the upper surface portion 22a as the outer edge of the region of the works W, and calculates the area of the region surrounded by the outer edge as the region area A(n) (here, A(1)). It also calculates the position of the center of gravity of the region surrounded by the outer edge as the center of gravity G(n) (here, G(1)).
  • Note that the control device 70 is not limited to using a grayscale image and may use a binarized image. Subsequently, the control device 70 sets the parameters of the loosening operation (S215), starts the loosening operation by driving the cylinder 32 of the vertical movement device 30 based on the set parameters (S220), and ends the start-time processing. In S215, the control device 70 selects and sets parameters suitable for the current works W from the learning model 71. Note that the control device 70 may set arbitrary parameters as appropriate when selection is difficult, for example, when the works W are new.
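The area and center-of-gravity computation of S210 can be sketched as follows, assuming a simple intensity threshold separates the darker works W from the belt background; a real implementation would trace the detected outer edge of the lump region rather than just thresholding:

```python
def region_area_and_centroid(gray, threshold=128):
    """Compute the region area A(n) and center of gravity G(n) from a
    grayscale image given as a list of rows of 0-255 values.
    Simplified sketch: pixels darker than `threshold` are treated as
    belonging to the works W; brighter pixels as the belt surface 22a."""
    area = 0
    sx = sy = 0
    for y, row in enumerate(gray):
        for x, v in enumerate(row):
            if v < threshold:      # workpiece pixel (darker than belt)
                area += 1
                sx += x
                sy += y
    if area == 0:
        return 0, None             # no workpiece region detected
    return area, (sx / area, sy / area)
```

The same routine serves for both image 1 (clear) and the blurred in-operation images n, since only the approximate boundary is needed.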
  • the control device 70 determines whether a predetermined timing during the loosening operation has come (S110).
  • The predetermined timing may be each time a predetermined time elapses after the loosening operation starts, or each time the cylinder 32 of the vertical movement device 30 performs a predetermined number of vertical movements. The predetermined timing occurs multiple times between the start and the end of the loosening operation. If the control device 70 determines in S110 that the predetermined timing has come, it increments the number n by one (S115) and captures image n (the image with number n) with the camera 53 during execution of the loosening operation (S120).
  • Capturing image n during execution of the loosening operation means capturing it without interrupting the continuous vertical movement of the cylinder 32. Since image n is therefore captured while the works W are jumping in various directions due to the vibration, each work W appears blurred in the image.
  • The control device 70 processes image n to calculate the region area A(n) and the center of gravity G(n) (S125), and determines whether the works W have been sufficiently separated (spread out) to be collectable by the collection robot 50 (S130).
  • The process of S125 is performed in the same manner as the process S210 at the start of the loosening operation, except that image n, in which the works W are blurred, is used. Even in the blurred image n, the approximate boundary between the lump of works W and the upper surface portion 22a of the conveyor belt 22 can be detected, so the outer edge of the region of the works W is detected.
  • In S130, the control device 70 determines that the works W have been sufficiently separated when some individual works W can be recognized in the processed image n, that is, when a state collectable by the collection robot 50 has been reached.
  • Alternatively, the control device 70 may predict a region area Ae at which the works W would be sufficiently separated based on, for example, the specifications and number of the works W, and determine that the works W have been sufficiently separated when the region area A(n) calculated in S125 is greater than or equal to Ae.
  • The control device 70 calculates the difference ΔA and the difference ΔG as evaluation values for evaluating the separation state (S135).
  • The difference ΔA is calculated as the area difference between the region area A(n) and a reference area.
  • The difference ΔG is calculated as the distance between the center of gravity G(n) and a reference center of gravity.
  • FIG. 9 is an explanatory diagram showing an example of a method of calculating the differences ⁇ A and ⁇ G.
  • In the method shown in FIG. 9A, the difference ΔA (= A(n) − A(1)) and the difference ΔG (= the distance between G(n) and G(1)) are calculated using the region area A(1) and the center of gravity G(1) of image 1, captured in the processing at the start of the loosening operation, as references.
  • Since image 1 is captured before the start of the loosening operation, it is a clear image in which the works W are stationary and not blurred, so the differences ΔA and ΔG can be calculated accurately.
  • Moreover, the differences ΔA and ΔG tend to become more pronounced as the loosening operation runs longer, which makes the evaluation easier and allows learning to be performed more appropriately.
  • In the method shown in FIG. 9B, the region area A(n−1) and the center of gravity G(n−1) of image (n−1), captured at the predetermined timing immediately before the one at which image n was captured, are used as references to calculate the difference ΔA (= A(n) − A(n−1)) and the difference ΔG (= the distance between G(n) and G(n−1)). With this method, evaluation is possible even when a parameter is changed during the loosening operation, as in the processing described later.
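The two evaluation values of S135 reduce to simple arithmetic; the helper below accepts either reference (image 1 as in FIG. 9A, or image n−1 as in FIG. 9B):

```python
import math

def evaluation_values(area_n, centroid_n, area_ref, centroid_ref):
    """Compute the S135 evaluation values: the area difference dA and the
    centroid displacement dG relative to a reference image. The reference
    may be image 1 (FIG. 9A) or the previous image n-1 (FIG. 9B)."""
    d_area = area_n - area_ref
    d_centroid = math.dist(centroid_n, centroid_ref)  # Euclidean distance
    return d_area, d_centroid
```

For example, a region that grew from area 100 to 120 while its centroid shifted by a 3-4-5 triangle would yield ΔA = 20 and ΔG = 5.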
  • Which of the methods of FIGS. 9A and 9B is used may be selected, for example, by an operator's designation via the input device 80, or the method of FIG. 9A may be switched to the method of FIG. 9B partway through.
  • FIG. 10 is an explanatory view showing an example of the area A and the center of gravity G.
  • In FIG. 10, the reference region area A(1) and center of gravity G(1) used in the method of FIG. 9A are indicated by solid lines, and the region area A(n) and center of gravity G(n) of image n are indicated by dotted lines.
  • FIG. 10A shows a state in which the region area A has increased without the center of gravity G changing, so the difference ΔG is small and the difference ΔA is large. Since this is a state in which the works W have been unraveled and separated without greatly deviating from the sampling area A2, the evaluation is high.
  • FIG. 10B shows a state in which the center of gravity G has moved with almost no change in the region area A, so the difference ΔA is small and the difference ΔG is large.
  • In this state, the position of the works W has shifted as a whole without the lump being sufficiently loosened, so the evaluation is low.
  • Thus, even with image n, captured during the unraveling operation while the works W are blurred, the lump region of the works W can be detected and the separation state evaluated by simple processing.
  • Although not illustrated, if the region area A is large even though the center of gravity G has moved, it can be judged that the lump of works W has been unraveled, so the evaluation is higher than in the case of FIG. 10B.
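The ordering of FIGS. 10A and 10B suggests an evaluation that rises with ΔA and falls with ΔG. A hypothetical scoring function consistent with that ordering (the weights are illustrative, not from the publication) could be:

```python
def separation_score(d_area, d_centroid, area_weight=1.0, drift_penalty=1.0):
    """Hypothetical evaluation consistent with FIG. 10: spreading of the
    lump (large d_area) raises the score, while the lump drifting out of
    the sampling area A2 (large d_centroid) lowers it."""
    return area_weight * d_area - drift_penalty * d_centroid
```

Under this scoring, the FIG. 10A case (large ΔA, small ΔG) correctly outranks the FIG. 10B case (small ΔA, large ΔG).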
  • the control device 70 updates the learning model 71 by learning such an evaluation result for the current parameter (S140), and determines whether or not a change to a more suitable parameter is necessary (S145).
  • the control device 70 makes the determination in S145 based on whether or not a parameter more suitable for the current work W can be selected from the updated learning model 71.
  • If a parameter change is necessary, the control device 70 changes the parameter to a more suitable one, continues the loosening operation (S150), and returns to S110.
  • If the control device 70 determines that a parameter change is unnecessary because the evaluation of the separation state for the current parameter is relatively high, it continues the loosening operation with the current parameter (S155) and returns to S110.
  • If, while performing such processing, the control device 70 determines in S130 that the works W have been sufficiently separated, it stops driving the cylinder 32 of the vertical movement device 30 to end the loosening operation (S160) and returns to S100.
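The routine of FIG. 7 (S100 to S160) can be summarized in pseudocode-like Python; the camera, cylinder, and learner objects and their methods are hypothetical stand-ins for the control device 70 and the hardware it drives:

```python
def loosening_control(camera, cylinder, learner, sufficient_area, max_steps=100):
    """Sketch of the loosening operation control routine (FIG. 7).
    Assumed interfaces (all hypothetical): camera.capture() -> (area, centroid),
    cylinder.start/set_params/stop, learner.select()/update()."""
    a_ref, g_ref = camera.capture()             # S205/S210: image 1, before start
    params = learner.select()                   # S215: pick parameters from model
    cylinder.start(params)                      # S220: begin loosening operation
    for _ in range(max_steps):                  # S110: predetermined timings
        a_n, g_n = camera.capture()             # S120/S125: image n, no interruption
        if a_n >= sufficient_area:              # S130: sufficiently separated?
            break
        d_area = a_n - a_ref                    # S135: evaluation values
        d_centroid = abs(g_n - g_ref)           # (1-D centroid for brevity)
        learner.update(params, d_area - d_centroid)   # S140: learn evaluation
        better = learner.select()               # S145: is a change needed?
        if better != params:                    # S150: switch parameters
            params = better
            cylinder.set_params(params)
    cylinder.stop()                             # S160: end loosening operation
    return params
```

Note the loop never pauses the cylinder to take an image, which is the point of the in-operation imaging step.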
  • the cylinder 32 of the vertically moving device 30 of the present embodiment corresponds to an actuator
  • S120 of the loosening operation control routine in FIG. 7 corresponds to an in-operation imaging step
  • S125 and S135 of the same processing correspond to evaluation steps
  • S140 corresponds to a learning step
  • S205 of the process at the start of the loosening operation in FIG. 8 corresponds to a pre-operation imaging step
  • S210 of the processing corresponds to an acquisition step.
  • the work system 10 corresponds to a work system
  • the sampling robot 50 corresponds to a robot
  • the camera 53 corresponds to an imaging device
  • the imaging processing unit 72 that executes S120 of the unraveling operation control routine corresponds to an imaging processing unit.
  • the evaluation processing unit 73 executing S125 and S135 of the processing corresponds to an evaluation processing unit
  • the learning processing unit 74 executing S140 of the processing corresponds to a learning processing unit.
  • In the work system 10 described above, the separation state of the plurality of works W is evaluated by processing image n, captured during the unraveling operation on the works W, and the relationship between the evaluation result and the parameters of the operation being executed is learned. Since the vertical movement device 30 therefore does not need to interrupt the loosening operation for learning, learning can be performed appropriately without lowering the work efficiency of the loosening operation.
  • Further, since the clear image 1 captured before the start of the operation is used as a reference, the evaluation values are obtained accurately and learning can be performed more appropriately. In addition, since the separation state of the works W can also be evaluated against the immediately preceding image (n−1) relative to image n captured at a predetermined timing during the unraveling operation, the evaluation values can be acquired and learning performed without interrupting the loosening operation even if a parameter is changed during the operation.
  • Since the separation state is evaluated using the area A(n) surrounded by the outer edge of the region of the works W, it can be evaluated appropriately by simple processing even when a clear image cannot be obtained. Further, since the separation state is evaluated using both the area A(n) and the center of gravity G(n), it can be evaluated even more appropriately.
  • In the embodiment described above, the evaluation is performed using the area A(n) and the center of gravity G(n), but the present disclosure is not limited to this: only the area A(n) may be used, or another evaluation value may be used.
  • For example, the height of the lump of works W may be detected from an image captured from the side and used as an evaluation value.
  • In the embodiment, the bulk state of the works W in image 1 and the separated state of the works W in the immediately preceding image are used as references, but the present disclosure is not limited to this; only one of them may be used.
  • Alternatively, the separation state of the works W in the image captured immediately before a parameter change, or in the image captured immediately after a parameter change, may be used as a reference.
  • In the embodiment, the work transfer device 20 includes a vertical movement device 30 for each transfer lane 21, but the upper surface portions 22a of the plurality of transfer lanes 21 may instead be moved up and down by a single vertical movement device 30.
  • the work transfer device 20 may include a support plate having an opening formed to extend over the plurality of transfer lanes 21.
  • In the embodiment, the work transfer device 20 includes the vertical movement device 30 that moves the collection area A2 of the conveyor belt 22 (upper surface portion 22a) up and down, but a vertical movement device 30 that moves the supply area A1 or another region up and down may also be provided.
  • In the embodiment, the cylinder 32 of the vertical movement device 30 is exemplified as the actuator that unravels the bulk state of the works W, but the actuator is not limited to this.
  • For example, the works W may be unraveled by an actuator that reciprocates a leveling member, such as a brush, in the X-axis direction or the Y-axis direction.
  • In this case, the separation state of the works W can be evaluated without interrupting the reciprocation of the actuator by capturing image n when the leveling member reaches the forward or backward end of its stroke.
  • the parameters include the angle at which the leveling member is brought into contact with the workpiece W, the speed of the reciprocating motion, and the like.
  • a leveling member may be attached as the end effector 52 of the collection robot 50, and the collection robot 50 may perform a loosening operation (leveling operation).
  • a parameter for controlling the drive motor 54 as an actuator of the sampling robot 50 may be learned.
  • the actuator that releases the work by the predetermined operation may be included in a robot that collects the work and performs the predetermined work.
  • the parameter learning method and the work system by the computer of the present disclosure may be configured as follows.
  • In the in-operation imaging step, the plurality of workpieces may be imaged at every predetermined timing during execution of the predetermined operation, and in the evaluation step, the separation state may be evaluated using, as a reference, the separation state obtained by processing the image captured in the immediately preceding in-operation imaging step. In this way, even when the parameter is changed during execution of the predetermined operation, the change in the separation state can be properly grasped, so learning can be performed more appropriately.
  • In the evaluation step, the image may be processed to detect the outer edge of the region where the plurality of workpieces are present, the area of that region may be calculated, and the separation state may be evaluated based on the area. In this case, even if a clear image cannot be obtained because the image is captured during execution of the predetermined operation, the separation state can be evaluated appropriately by simple processing.
  • A work system of the present disclosure includes an actuator that unravels a bulk state of a plurality of works by a predetermined operation; a robot that collects the works and performs predetermined work; an imaging device that captures images; an imaging processing unit that causes the imaging device to image the plurality of works during execution of the predetermined operation; an evaluation processing unit that processes the captured image to evaluate a separation state of the plurality of works; and a learning processing unit that learns a relationship between the evaluation result of the evaluation processing unit and the parameter in the predetermined operation being executed.
  • Like the parameter learning method described above, this work system learns the parameter for controlling the actuator that performs the predetermined operation using images captured during execution of that operation. The work system therefore does not need to interrupt the predetermined operation for learning, so the parameter can be learned appropriately without lowering the work efficiency of unraveling the bulk state of the works.
  • Functions for realizing the respective steps of the parameter learning method described above may be added to the work system.
  • The present disclosure is applicable to, for example, industries that manufacture work systems.

Abstract

Provided is a method for learning a parameter used to control an actuator that loosens a bulk state of a plurality of workpieces through a prescribed operation. The method includes a mid-operation imaging step of imaging the plurality of workpieces during execution of the prescribed operation, an evaluation step of processing an image captured in the mid-operation imaging step and evaluating a separation state of the plurality of workpieces, and a learning step of learning the relationship between the evaluation result of the evaluation step and a parameter in the prescribed operation being executed.

Description

Parameter learning method and work system
 This specification discloses a parameter learning method and a work system.
 Conventionally, there is known a work system in which workpieces such as electronic components and mechanical components are supplied onto a flexible support portion and are picked up by a robot so that a predetermined task is performed. In such a system, in order to unravel the bulk state of the workpieces supplied to the support portion, an impact is applied to the support portion from below (see, for example, Patent Literature 1). In this system, parameters such as the impact energy and the application point are variable so that the bulk state of the workpieces can be properly unraveled according to the weight, size, shape, material, and the like of the workpieces. Furthermore, an appropriate parameter is determined by imaging the workpieces in a stable state before and after the impact is applied, processing the captured images to recognize the separation state of the workpieces, and continuing to learn the relationship between the changed parameter and the separation state of the workpieces.
Patent Literature 1: Japanese Patent No. 3172494
 However, in the above-described work system, since the workpieces are imaged in a stable state, the imaging must be performed after the impact applied to the support portion has subsided. In other words, the application of the impact must be temporarily stopped every time the separation state of the workpieces is recognized, so the time during which the application of the impact is stopped may become long in order to continue learning. In that case, it takes time to unravel the bulk state of the workpieces, which leads to a decrease in work efficiency.
 The main object of the present disclosure is to appropriately perform parameter learning without lowering the work efficiency with which the bulk state of the workpieces is unraveled.
 The present disclosure employs the following means to achieve the above-mentioned main object.
 A parameter learning method according to the present disclosure is a method for learning a parameter used to control an actuator that unravels a bulk state of a plurality of workpieces by a predetermined operation. The gist of the method is that it includes an in-operation imaging step of imaging the plurality of workpieces during execution of the predetermined operation, an evaluation step of processing an image captured in the in-operation imaging step to evaluate a separation state of the plurality of workpieces, and a learning step of learning a relationship between the evaluation result of the evaluation step and the parameter in the predetermined operation being executed.
 In the parameter learning method of the present disclosure, the plurality of workpieces are imaged during execution of the predetermined operation that unravels their bulk state, the captured image is processed to evaluate the separation state of the workpieces, and the relationship between the evaluation result and the parameter in the predetermined operation being executed is learned. Since the parameter can thus be learned using images captured during execution of the predetermined operation, the actuator does not need to interrupt the operation for learning. The parameter can therefore be learned appropriately without lowering the work efficiency with which the bulk state of the workpieces is unraveled.
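The three steps of the method can be sketched in Python. All of the callables and the list-based model below are illustrative stand-ins chosen for this sketch, not part of the disclosure:

```python
# Hedged sketch of one pass of the disclosed cycle: image the workpieces during
# the operation, evaluate the separation state, and learn the relationship
# between the evaluation result and the parameters currently in use.

def learning_cycle(capture_image, evaluate_separation, model, params):
    image = capture_image()              # in-operation imaging step
    score = evaluate_separation(image)   # evaluation step
    model.append((params, score))        # learning step: relate params to result
    return score

# Minimal stand-ins to show the data flow.
model = []
score = learning_cycle(
    capture_image=lambda: "blurred image taken mid-operation",
    evaluate_separation=lambda img: 0.8,   # e.g. a normalized separation score
    model=model,
    params={"impact_force": 5.0, "frequency_hz": 10.0},
)
```

Because the image is taken while the actuator keeps running, nothing in this cycle requires the operation to pause.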
FIG. 1 is a configuration diagram showing an outline of the configuration of a work system 10.
FIG. 2 is a configuration diagram showing an outline of the configuration of a work transfer device 20.
FIG. 3 is a partial external view of the work transfer device 20 as viewed from the back side.
FIG. 4 is an explanatory diagram showing the electrical connections of a control device 70.
FIG. 5 is an explanatory diagram showing how the bulk state of workpieces W is unraveled.
FIG. 6 is a block diagram showing the functions of the control device 70.
FIG. 7 is a flowchart showing an example of a loosening operation control routine.
FIG. 8 is a flowchart showing an example of processing at the start of a loosening operation.
FIG. 9 is an explanatory diagram showing an example of a method of calculating differences ΔA and ΔG.
FIG. 10 is an explanatory diagram showing an example of a region area A and a centroid G.
 Next, embodiments for implementing the present disclosure will be described with reference to the drawings.
 FIG. 1 is a configuration diagram showing an outline of the configuration of the work system 10, FIG. 2 is a configuration diagram showing an outline of the configuration of the work transfer device 20, FIG. 3 is a partial external view of the work transfer device 20 as viewed from the back side, and FIG. 4 is an explanatory diagram showing the electrical connections of the control device 70. In FIGS. 1 and 2, the left-right direction is the X-axis direction, the front-rear direction is the Y-axis direction, and the up-down direction is the Z-axis direction.
 The work system 10 of the present embodiment is a system that transfers workpieces W stored in a supply box 12 to a mounting table T and aligns them there. As shown in FIG. 1, the work system 10 includes a mounting table transport device 16, the work transfer device 20, a supply robot 40, and a collection robot 50, all of which are installed on a workbench 11.
 The mounting table transport device 16 has a pair of belt conveyors spanning the left-right direction (X-axis direction) and spaced apart in the front-rear direction (Y-axis direction). The mounting table T is transported from left to right by the belt conveyors.
 The supply robot 40 is a robot that takes out workpieces W, which are various components such as mechanical or electrical components, from the supply box 12 and supplies them to a supply area A1 of the work transfer device 20 (see FIG. 2). The supply robot 40 includes a vertical articulated robot arm 41 and an end effector 42. The robot arm 41 has a plurality of links, a plurality of joints that connect the links rotatably or pivotally, drive motors 44 (see FIG. 4) that drive the joints, and encoders 45 (see FIG. 4) that detect the angles of the joints. The links include a distal link to which the end effector 42 is attached and a proximal link fixed to the workbench 11. The end effector 42 can hold and release a workpiece W; for example, a mechanical chuck, a suction nozzle, or an electromagnet can be used. The supply robot 40 supplies the workpieces W to the supply area A1 in a bulk state.
 The collection robot 50 is a robot that picks up workpieces W in a collection area A2 (see FIG. 2) of the work transfer device 20 and transfers and aligns them on the mounting table T. The collection robot 50 includes a vertical articulated robot arm 51 and an end effector 52. The robot arm 51 has a plurality of links, a plurality of joints that connect the links rotatably or pivotally, drive motors 54 (see FIG. 4) that drive the joints, and encoders 55 (see FIG. 4) that detect the angles of the joints. The links include a distal link to which the end effector 52 is attached and a proximal link fixed to the workbench 11. The end effector 52 can hold and release a workpiece W; for example, a mechanical chuck, a suction nozzle, or an electromagnet can be used. A camera 53 is also attached to the distal link of the robot arm 51 in order to image the workpieces W conveyed by the work transfer device 20 and the mounting table T conveyed by the mounting table transport device 16, and thereby grasp their positions and states.
 As shown in FIGS. 1 and 2, the work transfer device 20 has a plurality of transfer lanes 21, each capable of conveying workpieces W from the supply area A1 to the collection area A2 in the front-rear direction (Y-axis direction). Behind the work transfer device 20, a plurality of supply boxes 12 are arranged to store the workpieces W to be supplied to the respective transfer lanes 21.
 The work transfer device 20 includes a conveyor belt 22 and partitions 25. As shown in FIG. 2, the conveyor belt 22 is stretched over a drive roller 23a and a driven roller 23b. Workpieces W are placed on an upper surface portion 22a (placement portion) of the conveyor belt 22, and a drive motor 38 (see FIG. 4) rotates the drive roller 23a to convey the workpieces W in the belt feed direction. Side walls 24a and 24b are provided on both sides of the conveyor belt 22, and the drive roller 23a and the driven roller 23b are rotatably supported by them. As shown in FIG. 3, the work transfer device 20 has a support plate 28 on the back side of the upper surface portion 22a of the conveyor belt 22. The support plate 28 prevents the conveyor belt 22 from sagging under the weight of the workpieces W placed on the upper surface portion 22a. Openings 28a are formed in the support plate 28 at positions corresponding to the collection areas A2 of the respective transfer lanes 21. Below each opening 28a, a vertical movement device 30 is arranged for pushing up the upper surface portion 22a from the back and moving it up and down. The vertical movement device 30 includes a contact body 31 and a cylinder 32 that moves the contact body 31 up and down through the opening 28a. The cylinder 32 is supported by a support base 29 fixed to the side walls 24a and 24b. The partitions 25 are partition plates that divide the single conveyor belt 22 (upper surface portion 22a) into the plurality of transfer lanes 21. The partitions 25 extend parallel to the side walls 24a and 24b on both sides of the conveyor belt 22 and are arranged at equal intervals so that the transfer lanes 21 all have the same lane width.
 Although not shown, the control device 70 is configured as a well-known computer including a CPU, ROM, HDD, RAM, an input/output interface, a communication interface, and the like. Various signals from the encoders 45 of the supply robot 40, the encoders 55 of the collection robot 50, the camera 53, an input device 80, and the like are input to the control device 70. The control device 70 outputs various control signals to the drive motor 38 of the work transfer device 20, the vertical movement devices 30 (cylinders 32), the drive motors 44 of the supply robot 40, the drive motors 54 of the collection robot 50, the camera 53, the mounting table transport device 16, and the like.
 The control device 70 also learns parameters for controlling the cylinder 32 of the vertical movement device 30, and can determine appropriate parameters based on the learning result to control the cylinder 32. FIG. 5 is an explanatory diagram showing how the bulk state of the workpieces W is unraveled. As illustrated, the loosening operation in which the cylinder 32 moves the contact body 31 up and down (vibrates it) breaks up the lump of workpieces W in the bulk state into a separated state, making the workpieces W easier for the collection robot 50 to pick up. The control device 70 controls the cylinder 32 with parameters suitable for unraveling the workpieces W according to their specifications, such as weight, size, shape, and material. Examples of the parameters include the impact force and the vibration frequency with which the contact body 31 pushes up the conveyor belt 22 as it moves up and down.
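As a concrete illustration, the parameters named above could be bundled into a single value object. The field names and numbers below are assumptions made for this sketch only:

```python
from dataclasses import dataclass

# Hypothetical container for the actuator parameters mentioned in the text
# (impact force and vibration frequency of the cylinder's up-down motion).
@dataclass(frozen=True)
class LooseningParams:
    impact_force: float   # force with which the contact body pushes up the belt
    frequency_hz: float   # up-down vibration frequency of the cylinder

# A parameter set the control device might apply to the cylinder (illustrative).
params = LooseningParams(impact_force=5.0, frequency_hz=12.0)
```

Freezing the dataclass makes each tried parameter set immutable, which is convenient when pairing it with an evaluation result in a learning record.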
 FIG. 6 is a block diagram showing the functions of the control device 70. As illustrated, the control device 70 has a parameter learning unit 70A that mainly learns parameters, and a drive control unit 70B that mainly determines appropriate parameters and drive-controls the vertical movement device 30. The parameter learning unit 70A has a learning model 71, an imaging processing unit 72, an evaluation processing unit 73, and a learning processing unit 74. The imaging processing unit 72 causes the camera 53 to image the workpieces W in the bulk state or in the separated state in which they have been unraveled, and receives the captured images. The evaluation processing unit 73 processes a captured image, calculates a predetermined evaluation value relating to the separation state of the workpieces W, and evaluates the separation state. The learning processing unit 74 learns the relationship between the parameter in the loosening operation being executed and the evaluation result of the evaluation processing unit 73 by known machine learning, and constructs the learning model 71 including correlations with the specifications of the workpieces W and the like. Examples of learning techniques include reinforcement learning and genetic algorithms, and other techniques may also be used. The drive control unit 70B has a parameter determination unit 75 and a drive unit 76. The parameter determination unit 75 determines a parameter according to the specifications of the workpieces W and the like using the learning model 71, or determines an arbitrary parameter as appropriate. The drive unit 76 controls the cylinder 32 of the vertical movement device 30 based on the parameter determined by the parameter determination unit 75.
 In the work system 10 configured as described above, the supply control and conveyance control of the workpieces W, the loosening operation control, and the collection and placement control are performed in order. The supply control is performed by drive-controlling the supply robot 40 so that workpieces W are taken from the supply boxes 12 according to a supply order and supplied to the supply area A1 of the corresponding transfer lane 21. The supply order is, for example, an order specified by an operator through the input device 80. The conveyance control is performed by drive-controlling the work transfer device 20 so that the workpieces W supplied to the supply area A1 reach the collection area A2. The loosening operation control is performed by drive-controlling the cylinder 32 of the vertical movement device 30 corresponding to the collection area A2, for example, when the workpieces W have reached the collection area A2. The collection and placement control is performed by drive-controlling the collection robot 50 so that the workpieces W unraveled into the separated state by the loosening operation control are picked up and placed in alignment on the mounting table T. In the collection and placement control, the workpieces W in the collection area A2 are imaged by the camera 53, the captured image is processed, and the collection robot 50 is drive-controlled to pick up a selected workpiece W. These controls performed on the workpieces W in each transfer lane 21 may be executed in parallel as long as they do not affect the control of the workpieces W in the other transfer lanes 21. The details of the loosening operation control will now be described based on the loosening operation control routine shown in FIG. 7.
 In the loosening operation control routine of FIG. 7, the control device 70 first determines whether it is time to start the loosening operation (S100). The control device 70 determines that it is the start timing when the workpieces W have reached the collection area A2 through the conveyance control and the vertical movement device 30 is ready to be driven. The control device 70 may also determine that it is the start timing when, for example, the workpieces W remaining after some were picked up following a loosening operation require another loosening operation. When the control device 70 determines that it is time to start the loosening operation, it executes the loosening operation start processing shown in FIG. 8 (S105).
 In the loosening operation start processing of FIG. 8, the control device 70 first initializes a number n, which indicates the imaging order, to 1 (S200), and captures image 1, the image with number 1, with the camera 53 before the loosening operation starts (S205). Next, the control device 70 processes image 1 to detect the outer edge of the region occupied by the lump of workpieces W, and calculates its region area A(1) and the centroid G(1) of that region (S210). Since the workpieces W are supplied to the supply area A1 in a bulk state and conveyed to the collection area A2, in image 1 the workpieces W are entangled into a lump, and the brightness values and the like differ between the workpiece region and the upper surface portion 22a of the conveyor belt 22 forming the background. Therefore, in S210, for example, image 1 is converted into a grayscale image, the boundary between the lump of workpieces W and the upper surface portion 22a is detected from the grayscale image as the outer edge of the workpiece region, and the area enclosed by the outer edge is calculated as the region area A(n) (here, A(1)). The position of the center of gravity of the region enclosed by the outer edge is calculated as the centroid G(n) (here, G(1)). The control device 70 is not limited to using a grayscale image and may use a binarized image. Subsequently, the control device 70 sets the parameters of the loosening operation (S215) and starts the loosening operation by driving the cylinder 32 of the vertical movement device 30 based on the set parameters (S220), whereupon the loosening operation start processing ends. In S215, the control device 70 selects and sets parameters suitable for the current workpieces W from the learning model 71. When the selection is difficult, for example because the workpieces W are new, the control device 70 may set arbitrary parameters as appropriate.
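The area and centroid computation described above can be sketched with NumPy, assuming dark workpieces on a brighter belt surface. The threshold value and this pixel-counting implementation are assumptions for illustration, not the patented processing itself:

```python
import numpy as np

# Illustrative computation of the region area A(n) and centroid G(n) from a
# grayscale image: binarize, count workpiece pixels, and average their
# coordinates. A contour-based outer-edge detection would serve equally well.
def area_and_centroid(gray, threshold=128):
    """Return the region area A(n) in pixels and the centroid G(n) as (row, col)."""
    mask = gray < threshold              # True where workpieces lie (dark pixels)
    area = int(mask.sum())               # region area A(n)
    if area == 0:
        return 0, None
    rows, cols = np.nonzero(mask)
    return area, (float(rows.mean()), float(cols.mean()))  # centroid G(n)

# Tiny synthetic image: a 2x2 dark lump on a bright 6x6 background.
img = np.full((6, 6), 255, dtype=np.uint8)
img[1:3, 1:3] = 0
a, g = area_and_centroid(img)  # a == 4, g == (1.5, 1.5)
```

Because this only needs an approximate region boundary, it remains usable on the blurred images captured while the belt is vibrating.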
 After executing the loosening operation start processing of S105, the control device 70 determines whether a predetermined timing during the loosening operation has been reached (S110). The predetermined timing may be, for example, every time a predetermined period elapses after the loosening operation starts, or every time the cylinder 32 of the vertical movement device 30 completes a predetermined number of up-down strokes. This predetermined timing occurs multiple times between the start and the end of the loosening operation. When the control device 70 determines in S110 that the predetermined timing has been reached, it updates the number n by incrementing it by 1 (S115) and captures image n, the image with number n, with the camera 53 during execution of the loosening operation (S120). Capturing image n during execution of the loosening operation means capturing it without interrupting the continuous up-down movement of the cylinder 32. Image n is therefore captured while the workpieces W are bouncing in various directions due to the vibration, so each workpiece W appears blurred in the image.
 Subsequently, the control device 70 processes image n to calculate the region area A(n) and the centroid G(n) (S125), and determines whether the workpieces W have been separated (spread out) sufficiently for the collection robot 50 to pick them up (S130). The processing of S125 is performed in the same way as S210 of the loosening operation start processing, except that image n, in which the workpieces W are blurred, is used. Even in a blurred image n, the approximate boundary between the lump of workpieces W and the upper surface portion 22a of the conveyor belt 22 can be detected, so the outer edge of the workpiece region can be found and the region area A(n) and centroid G(n) can be calculated. The control device 70 determines in S130 that the workpieces W have been sufficiently separated when enough individual workpieces W can be recognized in the processed image n that they can be picked up by the collection robot 50. Alternatively, the control device 70 may predict a region area Ae at which the workpieces W are sufficiently separated, based on, for example, their specifications and number, and determine that the workpieces W are sufficiently separated when the region area A(n) calculated in S125 reaches or exceeds the region area Ae.
 When the control device 70 determines in S130 that the workpieces W are not yet sufficiently separated, it calculates a difference ΔA and a difference ΔG as evaluation values and evaluates the separation state (S135). The difference ΔA is calculated as the area difference between the region area A(n) and a reference region area, and the difference ΔG is calculated as the distance between the centroid G(n) and a reference centroid.
 FIG. 9 is an explanatory diagram showing an example of how the differences ΔA and ΔG are calculated. In the method shown in FIG. 9A, the region area A(1) and centroid G(1) of image 1, captured in the loosening operation start processing, are used as the references, and the difference ΔA (A(n) − A(1)) and the difference ΔG (G(n) − G(1)) are calculated. With this method, the differences ΔA and ΔG can be calculated accurately because the comparison is against the sharp, blur-free image 1, captured before the loosening operation started while the workpieces W were stationary. In addition, the differences ΔA and ΔG may become more pronounced as the loosening operation continues, which makes the evaluation easier and allows the learning to be performed more appropriately. In the method shown in FIG. 9B, the region area A(n−1) and centroid G(n−1) of image (n−1), captured at the predetermined timing immediately before the timing at which image n was captured, are used as the references, and the difference ΔA (A(n) − A(n−1)) and the difference ΔG (G(n) − G(n−1)) are calculated. With this method, the evaluation can be performed even when, for example, the parameter is changed during the loosening operation in the processing described later. That is, since the control device 70 can change the parameter and perform the evaluation while continuing the loosening operation, it can learn the parameter appropriately without interrupting the loosening operation. One of the methods of FIGS. 9A and 9B may be selected, for example, according to an operator's designation through the input device 80, or the method of FIG. 9A may be switched to the method of FIG. 9B when the parameter is changed as described later.
 FIG. 10 is an explanatory diagram showing an example of the region area A and the centroid G. In FIG. 10, the reference region area A(1) and centroid G(1) used in the method of FIG. 9A are shown by solid lines, and the region area A(n) and centroid G(n) of image n are shown by dotted lines. FIG. 10A shows a state in which the region area A has grown while the centroid G has hardly moved, so the difference ΔG is small and the difference ΔA is large. In this state the workpieces W are being unraveled and separated without straying far from the collection area A2, so the evaluation is high. FIG. 10B shows a state in which the centroid G has moved while the region area A has hardly changed, so the difference ΔA is small and the difference ΔG is large. In this state the lump of workpieces W is shifting as a whole without being sufficiently unraveled, so the evaluation is low. In this way, even using image n, captured during the loosening operation with the workpieces W blurred, the region of the lump of workpieces W can be detected and the separation state evaluated by simple processing. Although not illustrated, even if the centroid G has moved, the lump of workpieces W can be judged to be unraveling as long as the region area A has grown, so the evaluation in that case is higher than in FIG. 10B.
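The evaluation just described, in which a large ΔA with a small ΔG scores high while a large ΔG with a small ΔA scores low, can be sketched as a single score. The linear weighting below is an assumption chosen purely for illustration:

```python
import math

# Hedged sketch of the separation-state evaluation: reward spreading of the
# pile (ΔA) and penalize drift of the pile as a whole (ΔG).
def separation_score(area_n, centroid_n, area_ref, centroid_ref, w_shift=1.0):
    d_a = area_n - area_ref                      # ΔA: how much the pile has spread
    d_g = math.dist(centroid_n, centroid_ref)    # ΔG: how far the pile has drifted
    return d_a - w_shift * d_g                   # higher means better unraveling

# FIG. 10A-like case: area grows, centroid barely moves -> high score.
high = separation_score(140, (50.0, 50.0), 100, (50.0, 51.0))
# FIG. 10B-like case: area unchanged, centroid drifts -> low score.
low = separation_score(101, (50.0, 70.0), 100, (50.0, 50.0))
```

Passing A(1), G(1) or A(n−1), G(n−1) as the reference arguments corresponds to the two reference choices of FIG. 9A and FIG. 9B.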
 The control device 70 updates the learning model 71 by learning such an evaluation result for the current parameter (S140), and determines whether a change to a more suitable parameter is necessary (S145). The control device 70 makes the determination in S145 based on whether a parameter more suitable for the current works W can be selected from the updated learning model 71. When the control device 70 determines that the parameter needs to be changed, it changes to the more suitable parameter, continues the loosening operation (S150), and returns to S110. On the other hand, when the control device 70 determines that no parameter change is needed, for example because the evaluation of the separation state under the current parameter is relatively high, it continues the loosening operation with the current parameter (S155) and returns to S110. When the control device 70 determines in S130, while repeating this processing, that the works W have been sufficiently separated, it stops driving the cylinder 32 of the vertical movement device 30 to end the loosening operation (S160) and returns to S100.
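The control flow of S110-S160 — evaluate at each imaging timing, update the learning model, and switch parameters without stopping the actuator — can be outlined as follows. This is a hypothetical sketch: `ParamModel`, its mean-score selection rule, and the callback signatures are illustrative assumptions, not the patent's learning model 71.

```python
class ParamModel:
    """Minimal stand-in for learning model 71: tracks the mean evaluation
    score recorded for each candidate parameter."""
    def __init__(self, candidates):
        self.scores = {p: [] for p in candidates}

    def record(self, param, score):
        self.scores[param].append(score)

    def best_param(self):
        def mean(v):
            return sum(v) / len(v) if v else 0.0
        return max(self.scores, key=lambda p: mean(self.scores[p]))

def loosening_control(capture, evaluate, model, separated, max_steps=50):
    """Sketch of the loosening control routine: image and evaluate at each
    timing (S120, S125/S135), update the model (S140), and change the
    parameter if a better one is available (S145/S150) — all without
    interrupting the loosening operation."""
    param = model.best_param()
    prev = capture()                     # reference image for the evaluation
    for _ in range(max_steps):
        cur = capture()                  # in-operation image (S120)
        score = evaluate(prev, cur)      # separation-state evaluation
        model.record(param, score)       # S140: learn (parameter, score)
        if separated(cur):               # S130: sufficiently separated?
            return param                 # S160: end the loosening operation
        better = model.best_param()
        if better != param:              # S145/S150: switch parameter
            param = better
        prev = cur                       # next evaluation uses this image
    return param
```

A usage example with stubbed sensing: `capture` can draw from a sequence of separation levels and `evaluate` can return their difference.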
 Here, the correspondence between the components of the present embodiment and the components of the present disclosure will be clarified. The cylinder 32 of the vertical movement device 30 of the present embodiment corresponds to the actuator; S120 of the loosening operation control routine in FIG. 7 corresponds to the in-operation imaging step; S125 and S135 of the same routine correspond to the evaluation step; and S140 of the same routine corresponds to the learning step. S205 of the loosening-operation start processing in FIG. 7 corresponds to the pre-operation imaging step, and S210 of the same processing corresponds to the acquisition step. In addition, the work system 10 corresponds to the work system, the sampling robot 50 corresponds to the robot, the camera 53 corresponds to the imaging device, the imaging processing unit 72 that executes S120 of the loosening operation control routine corresponds to the imaging processing unit, the evaluation processing unit 73 that executes S125 and S135 corresponds to the evaluation processing unit, and the learning processing unit 74 that executes S140 corresponds to the learning processing unit.
 In the parameter learning method of the present embodiment described above, the separation state of the plurality of works W is evaluated by processing the image n captured during the loosening operation on the plurality of works W, and the relationship between the evaluation result and the parameter used in the loosening operation being executed is learned. Because the vertical movement device 30 does not need to interrupt the loosening operation for learning, learning can be performed appropriately without lowering the work efficiency of the loosening operation.
 In addition, since the image 1 captured before the start of the loosening operation is processed to acquire the bulk state of the plurality of works W, and the separation state of the works W is evaluated with that bulk state as a reference, the evaluation value can be acquired accurately and learning performed more appropriately. Further, for each image n captured at a predetermined timing during the loosening operation, the separation state of the works W is evaluated with the immediately preceding image (n-1) as a reference, so even if the parameter is changed during the loosening operation, the evaluation value can be acquired and learning performed without interrupting the loosening operation.
 In addition, since the separation state is evaluated using the region area A(n) enclosed by the outer edge of the region of the works W, the separation state can be evaluated appropriately by simple processing even when a clear image cannot be obtained. Further, since the separation state is evaluated using both the region area A(n) and its center of gravity G(n), the separation state can be evaluated even more appropriately.
 Note that the present disclosure is in no way limited to the embodiment described above, and it goes without saying that it can be implemented in various modes as long as they belong to the technical scope of the present disclosure.
 For example, in the embodiment described above, the evaluation is performed using the region area A(n) and the center of gravity G(n), but the method is not limited to this; only the region area A(n) may be used, or another evaluation value may be used. For example, since the height of the lump of works W also decreases as the bulk state is unraveled, the height of the lump of works W may be detected from an image captured from the side and used as an evaluation value.
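The side-view height alternative mentioned here could be computed, for example, as the distance from the topmost occupied row of a side-view binary image to the bottom. Everything in this sketch (the grid representation and the function name) is an illustrative assumption, not part of the patent.

```python
def pile_height(side_mask):
    """Alternative evaluation value: pile height measured from a side-view
    binary image (rows of 0/1, row 0 at the top). Returns the number of rows
    from the topmost occupied row down to the bottom of the image; the height
    should decrease as the bulk of works W is unraveled."""
    for i, row in enumerate(side_mask):
        if any(row):
            return len(side_mask) - i
    return 0
```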
 In the embodiment described above, the bulk state of the works W in image 1 and the separation state of the works W in the immediately preceding image n are used as references for evaluation, but the references are not limited to these; only one of the two may be used. Alternatively, the separation state of the works W in the image n captured immediately before the parameter change, or in the image n captured immediately after the parameter change, may be used as the reference.
 In the embodiment described above, the work transfer device 20 includes a vertical movement device 30 for each transfer lane 21, but a single vertical movement device 30 may move the upper surface portions 22a of a plurality of transfer lanes 21 up and down together. In this case, the work transfer device 20 may include a support plate in which an opening is formed so as to extend across the plurality of transfer lanes 21.
 In the embodiment described above, the work transfer device 20 includes the vertical movement device 30 that moves the sampling area A2 of the conveyor belt 22 (upper surface portion 22a) up and down, but a vertical movement device 30 that moves the supply area A1 or another region up and down may be arranged instead.
 In the embodiment described above, the cylinder 32 of the vertical movement device 30 is given as an example of the actuator that unravels the bulk state of the works W, but the actuator is not limited to this. For example, the works W may be unraveled by an actuator that reciprocates a leveling member such as a brush in the X-axis direction or the Y-axis direction. Even in that case, by capturing the image n when the leveling member has moved to the forward end position or the backward end position, the separation state of the works W can be evaluated and learned without interrupting the reciprocation of the actuator. Parameters in this case include the angle at which the leveling member contacts the works W and the speed of the reciprocation. Alternatively, a leveling member may be made attachable as the end effector 52 of the sampling robot 50, and the sampling robot 50 may be caused to perform the loosening operation (leveling operation). In that case, parameters for controlling the drive motor 54 as the actuator of the sampling robot 50 may be learned. In this way, the actuator that unravels the works by a predetermined operation may be included in the robot that collects the works and performs the predetermined work.
 Here, the computer-based parameter learning method and the work system of the present disclosure may be configured as follows. For example, the parameter learning method of the present disclosure may include a pre-operation imaging step of imaging the plurality of works before the start of the predetermined operation, and an acquisition step of processing the image captured in the pre-operation imaging step to acquire the bulk state of the plurality of works, and in the evaluation step, the separation state may be evaluated using the bulk state acquired in the acquisition step as a reference. In this way, the separation state can be evaluated accurately on the basis of a relatively clear image captured before execution of the operation starts, so learning can be performed more appropriately.
 In the parameter learning method of the present disclosure, in the in-operation imaging step, the plurality of works may be imaged at each predetermined timing during execution of the predetermined operation, and in the evaluation step, each time an image is captured in the in-operation imaging step, the separation state may be evaluated using as a reference the separation state obtained when the image captured in the immediately preceding in-operation imaging step was processed. In this way, even when the parameter is changed during execution of the predetermined operation, the change in the separation state can be grasped appropriately, so learning can be performed more appropriately.
 In the parameter learning method of the present disclosure, in the evaluation step, the image may be processed to detect the outer edge of the region in which the plurality of works are present, the area of the region may be calculated, and the separation state may be evaluated based on the area. In this way, even when a clear image cannot be obtained because the image is captured during execution of the predetermined operation, the separation state can be evaluated appropriately by simple processing.
 The gist of the work system of the present disclosure is a work system including an actuator that unravels a bulk state of a plurality of works by a predetermined operation, a robot that collects the works and performs a predetermined work, and an imaging device that captures images, the work system comprising: an imaging processing unit that causes the imaging device to image the plurality of works during execution of the predetermined operation; an evaluation processing unit that processes the captured images to evaluate the separation state of the plurality of works; and a learning processing unit that learns the relationship between the evaluation result of the evaluation processing unit and the parameter in the predetermined operation being executed.
 Like the parameter learning method described above, the work system of the present disclosure can learn the parameter for controlling the actuator that executes the predetermined operation using images captured during execution of the predetermined operation, so the work device does not need to interrupt the predetermined operation for learning. For this reason, the parameter can be learned appropriately without lowering the work efficiency for unraveling the bulk state of the works. Note that functions realizing the individual steps of the parameter learning method may be added to this work system.
 The present disclosure is applicable to, for example, the manufacturing industry of work systems.
 10 work system, 11 work table, 12 supply box, 16 mounting-table transfer device, 20 work transfer device, 21 transfer lane, 22 conveyor belt, 22a upper surface portion, 23a drive roller, 23b driven roller, 24a, 24b side wall, 28 support plate, 28a opening, 29 support base, 30 vertical movement device, 31 contact body, 32 cylinder, 38 drive motor, 40 supply robot, 41 robot arm, 42 end effector, 44 drive motor, 45 encoder, 50 sampling robot, 51 robot arm, 52 end effector, 53 camera, 54 drive motor, 55 encoder, 70 control device, 70A parameter learning unit, 70B drive control unit, 71 learning model, 72 imaging processing unit, 73 evaluation processing unit, 74 learning processing unit, 75 parameter determination unit, 76 drive unit, 80 input device, T mounting table.

Claims (5)

  1.  A parameter learning method for controlling an actuator that unravels a bulk state of a plurality of works by a predetermined operation, the method comprising:
     an in-operation imaging step of imaging the plurality of works during execution of the predetermined operation;
     an evaluation step of processing an image captured in the in-operation imaging step to evaluate a separation state of the plurality of works; and
     a learning step of learning a relationship between an evaluation result of the evaluation step and the parameter in the predetermined operation being executed.
  2.  The parameter learning method according to claim 1, further comprising:
     a pre-operation imaging step of imaging the plurality of works before the start of the predetermined operation; and
     an acquisition step of processing an image captured in the pre-operation imaging step to acquire the bulk state of the plurality of works,
     wherein, in the evaluation step, the separation state is evaluated using the bulk state acquired in the acquisition step as a reference.
  3.  The parameter learning method according to claim 1, wherein
     in the in-operation imaging step, the plurality of works are imaged at each predetermined timing during execution of the predetermined operation, and
     in the evaluation step, each time an image is captured in the in-operation imaging step, the separation state is evaluated using as a reference the separation state obtained when the image captured in the immediately preceding in-operation imaging step was processed.
  4.  The parameter learning method according to any one of claims 1 to 3, wherein, in the evaluation step, the image is processed to detect an outer edge of a region in which the plurality of works are present, an area of the region is calculated, and the separation state is evaluated based on the area.
  5.  A work system comprising an actuator that unravels a bulk state of a plurality of works by a predetermined operation, a robot that collects the works and performs a predetermined work, and an imaging device that captures images, the work system further comprising:
     an imaging processing unit that causes the imaging device to image the plurality of works during execution of the predetermined operation;
     an evaluation processing unit that processes the captured images to evaluate a separation state of the plurality of works; and
     a learning processing unit that learns a relationship between an evaluation result of the evaluation processing unit and the parameter in the predetermined operation being executed.
PCT/JP2018/029287 2018-08-03 2018-08-03 Parameter learning method and work system WO2020026447A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/JP2018/029287 WO2020026447A1 (en) 2018-08-03 2018-08-03 Parameter learning method and work system
CN201880096177.2A CN112512942B (en) 2018-08-03 2018-08-03 Parameter learning method and operating system
JP2020534026A JP7121127B2 (en) 2018-08-03 2018-08-03 Parameter learning method and working system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/029287 WO2020026447A1 (en) 2018-08-03 2018-08-03 Parameter learning method and work system

Publications (1)

Publication Number Publication Date
WO2020026447A1 true WO2020026447A1 (en) 2020-02-06

Family

ID=69231559

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/029287 WO2020026447A1 (en) 2018-08-03 2018-08-03 Parameter learning method and work system

Country Status (3)

Country Link
JP (1) JP7121127B2 (en)
CN (1) CN112512942B (en)
WO (1) WO2020026447A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023184034A1 (en) * 2022-03-31 2023-10-05 Ats Automation Tooling Systems Inc. Systems and methods for feeding workpieces to a manufacturing line

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3172494B2 (en) * 1997-11-17 2001-06-04 アデプト テクノロジー インコーポレイティッド Impact type parts feeder
JP2010241592A (en) * 2009-04-03 2010-10-28 Satoru Kobayashi Non-vibration type part feeder
JP2017030135A (en) * 2015-07-31 2017-02-09 ファナック株式会社 Machine learning apparatus, robot system, and machine learning method for learning workpiece take-out motion
WO2018092211A1 (en) * 2016-11-16 2018-05-24 株式会社Fuji Transfer device and transport system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS62174617A (en) * 1986-01-29 1987-07-31 Teijin Eng Kk Weighing method for granule
AU2011201445A1 (en) * 2010-04-01 2011-10-20 Siemens Aktiengesellschaft Method and apparatus for measuring a parameter during the transport of objects to a processing device
EP2789400B1 (en) * 2011-12-07 2017-07-19 Kao Corporation Method and device for applying powder or granule and method for manufacturing a heat generating element using said method
CN104085667B (en) * 2014-06-30 2016-05-25 合肥美亚光电技术股份有限公司 The automatic adjustment module of charging and method thereof, device, bulk cargo foreign matter testing agency
WO2016043324A1 (en) * 2014-09-19 2016-03-24 株式会社イシダ Dispersion and supply device and combination weighing device
CH711104A2 (en) * 2015-05-18 2016-11-30 Finatec Holding Ag Test method and test system for testing workpieces.


Also Published As

Publication number Publication date
JP7121127B2 (en) 2022-08-17
CN112512942B (en) 2022-05-17
JPWO2020026447A1 (en) 2021-08-02
CN112512942A (en) 2021-03-16

Similar Documents

Publication Publication Date Title
JP6734402B2 (en) Work machine
JP7163506B2 (en) Work robots and work systems
JP7231706B2 (en) work machine
WO2020026447A1 (en) Parameter learning method and work system
JP7283881B2 (en) work system
JP6898374B2 (en) Motion adjustment device for adjusting the operation of the robot device and motion adjustment method for adjusting the motion of the robot device
CN110582193B (en) Optimization device and optimization method for component mounting line
EP3205457B1 (en) Transfer method and transfer apparatus
WO2018092211A1 (en) Transfer device and transport system
JP2013094936A (en) Method and system for taking out part
JP6814295B2 (en) Parts supply equipment and work system
WO2020021643A1 (en) End effector selection method and selection system
WO2023013056A1 (en) Workpiece picking method and workpiece picking system
CN111278612B (en) Component transfer device
JP7440635B2 (en) robot system
CN114450133A (en) Robot control system, robot control method, and program
JP7257514B2 (en) Component mounting system and learning device
JP4846678B2 (en) Electronic component mounting device
JP6915085B2 (en) Working machine and gripping position search method
CN111508014B (en) System for eliminating interference of multiple workpieces stacked randomly
TW202027934A (en) System for eliminating interference of randomly stacked workpieces
JP5013124B2 (en) Spatter removal apparatus and spatter removal method
CN116529028A (en) Device and method for feeding flexible annular workpieces

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18928452

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020534026

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18928452

Country of ref document: EP

Kind code of ref document: A1