CN112512942B - Parameter learning method and operating system

Publication number: CN112512942B
Authority: CN (China)
Legal status: Active
Application number: CN201880096177.2A
Language: Chinese (zh)
Other versions: CN112512942A
Inventors: 内田刚, 大池博史, 江嵜弘健, 阿努苏亚·纳拉达比
Assignee (original and current): Fuji Corp
Application filed by Fuji Corp
Publication of application: CN112512942A
Publication of grant: CN112512942B

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B65: CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65G: TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
    • B65G47/00: Article or material-handling devices associated with conveyors; Methods employing such devices
    • B65G47/02: Devices for feeding articles or materials to conveyors
    • B65G47/04: Devices for feeding articles or materials to conveyors for feeding articles
    • B65G47/12: Devices for feeding articles or materials to conveyors for feeding articles from disorderly-arranged article piles or from loose assemblages of articles
    • B65G47/14: Devices for feeding articles or materials to conveyors for feeding articles from disorderly-arranged article piles or from loose assemblages of articles, arranging or orientating the articles by mechanical or pneumatic means during feeding

Abstract

A learning method for a parameter used to control an actuator that releases, by a predetermined operation, a state in which a plurality of workpieces are stacked in bulk includes the steps of: an operation-period imaging step of imaging the plurality of workpieces during execution of the predetermined operation; an evaluation step of processing the image captured in the operation-period imaging step to evaluate a separation state of the plurality of workpieces; and a learning step of learning a relationship between the evaluation result of the evaluation step and the parameter of the predetermined operation during its execution.

Description

Parameter learning method and operating system
Technical Field
The present specification discloses a parameter learning method and an operating system.
Background
Conventionally, among work systems in which workpieces such as electronic components or mechanical components are supplied to a flexible support portion and picked up by a robot to perform a predetermined task, a system has been disclosed that applies an impact to the support portion from below in order to release the bulk-stacked state of the workpieces supplied to the support portion (see, for example, Patent Document 1). In this system, parameters such as the impact energy and the point of application are variable so that the bulk state of the workpieces can be loosened appropriately according to the weight, size, shape, material, and the like of the workpieces. Further, the workpieces are imaged in a stable state before and after the impact is applied, the captured images are processed to recognize the separation state of the workpieces, and the relationship between the varied parameters and the separation state of the workpieces is learned continuously, thereby determining appropriate parameters.
Prior Art Documents
Patent Document 1: Japanese Patent No. 3172494
Disclosure of Invention
Problems to be solved by the invention
However, in the above-described work system, since the workpieces are imaged in a stable state, imaging must wait until the impact applied to the support portion has subsided. That is, because the application of the impact must be stopped temporarily each time the separation state of the workpieces is recognized, the total time during which the impact is stopped can become long when learning is continued. This lengthens the time needed to loosen the bulk state of the workpieces and reduces work efficiency.
A main object of the present disclosure is to learn the parameter appropriately without reducing the efficiency of the work of releasing the bulk-stacked state of the workpieces.
Means for solving the problems
In order to achieve the above main object, the present disclosure adopts the following means.
A parameter learning method according to the present disclosure is a method of learning a parameter for controlling an actuator that releases, by a predetermined operation, a state in which a plurality of workpieces are stacked in bulk, and includes: an operation-period imaging step of imaging the plurality of workpieces during execution of the predetermined operation; an evaluation step of processing the image captured in the operation-period imaging step to evaluate a separation state of the plurality of workpieces; and a learning step of learning a relationship between an evaluation result of the evaluation step and the parameter of the predetermined operation during its execution.
The parameter learning method of the present disclosure images the plurality of workpieces during execution of a predetermined operation for releasing their bulk-stacked state, processes the captured images to evaluate the separation state of the workpieces, and learns the relationship between the evaluation result and the parameter of the predetermined operation during its execution. Because the parameter can thus be learned from images captured while the predetermined operation is running, the actuator need not interrupt the operation for the sake of learning. The parameter can therefore be learned appropriately without lowering the efficiency of the work of releasing the bulk-stacked state of the workpieces.
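The three steps above can be sketched as a single loop. This is a hypothetical illustration, not the disclosed implementation: the callables `capture`, `evaluate`, and `learn` are assumed placeholders for the imaging, evaluation, and learning steps.

```python
# Hypothetical sketch of the disclosed learning method: imaging during the
# predetermined operation, evaluating separation, and learning the relation
# between the evaluation result and the parameter in use.

def learning_loop(capture, evaluate, learn, params, n_steps):
    """capture() returns an image taken while the actuator keeps running;
    evaluate(image) returns a separation-state score; learn(params, score)
    records the (parameter, evaluation) relationship."""
    history = []
    for _ in range(n_steps):
        image = capture()        # operation-period imaging step (no interruption)
        score = evaluate(image)  # evaluation step
        learn(params, score)     # learning step
        history.append((dict(params), score))
    return history
```

The point of the sketch is only that all three steps run inside the loop, so the actuator is never stopped between learning iterations.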
Drawings
Fig. 1 is a schematic configuration diagram showing the configuration of the work system 10.
Fig. 2 is a schematic configuration diagram showing the configuration of the workpiece conveyance device 20.
Fig. 3 is a partial external view of the workpiece conveyance device 20 as viewed from the back side.
Fig. 4 is an explanatory diagram showing an electrical connection relationship of the control device 70.
Fig. 5 is an explanatory diagram showing a state in which the bulk-stacked workpieces W are loosened.
Fig. 6 is a block diagram showing the functions of the control device 70.
Fig. 7 is a flowchart showing an example of the loosening operation control routine.
Fig. 8 is a flowchart showing an example of the processing at the start of the loosening operation.
Fig. 9 is an explanatory diagram illustrating an example of a method of calculating the differences ΔA and ΔG.
Fig. 10 is an explanatory diagram showing an example of the region area A and the center of gravity G.
Detailed Description
Next, a mode for carrying out the present disclosure will be described with reference to the drawings.
Fig. 1 is a schematic configuration diagram showing the configuration of the work system 10, Fig. 2 is a schematic configuration diagram showing the configuration of the workpiece conveyance device 20, Fig. 3 is a partial external view of the workpiece conveyance device 20 as viewed from the back side, and Fig. 4 is an explanatory diagram showing the electrical connection relationships of the control device 70. In Figs. 1 and 2, the left-right direction is the X-axis direction, the front-rear direction is the Y-axis direction, and the up-down direction is the Z-axis direction.
The work system 10 according to the present embodiment is a system that transfers workpieces W stored in supply cassettes 12 onto a table T and aligns them there. As shown in Fig. 1, the work system 10 includes a table conveyance device 16, a workpiece conveyance device 20, a supply robot 40, and a pickup robot 50, all provided on the work table 11.
The table conveyance device 16 includes a pair of belt conveyors that extend in the left-right direction (X-axis direction) and are spaced apart from each other in the front-rear direction (Y-axis direction). The table T is conveyed from left to right by these belt conveyors.
The supply robot 40 is a robot for taking workpieces W, which may be any of various components such as mechanical components and motor components, out of the supply cassettes 12 and supplying them to the supply area A1 (see Fig. 2) of the workpiece conveyance device 20. The supply robot 40 includes a vertical articulated robot arm 41 and an end effector 42. The robot arm 41 includes a plurality of links, a plurality of joints connecting the links rotatably or pivotally to one another, drive motors 44 (see Fig. 4) driving the joints, and encoders 45 (see Fig. 4) detecting the angle of each joint. The links include a distal end link to which the end effector 42 is attached and a proximal end link fixed to the work table 11. The end effector 42 can hold and release a workpiece W using, for example, a mechanical chuck, a suction nozzle, or an electromagnet, and supplies the workpieces W in bulk to the supply area A1.
The pickup robot 50 is a robot for picking up workpieces W in the pickup area A2 (see Fig. 2) of the workpiece conveyance device 20, transferring them onto the table T, and aligning them. The pickup robot 50 includes a vertical articulated robot arm 51 and an end effector 52. The robot arm 51 includes a plurality of links, a plurality of joints connecting the links rotatably or pivotally to one another, drive motors 54 (see Fig. 4) driving the joints, and encoders 55 (see Fig. 4) detecting the angle of each joint. The links include a distal end link to which the end effector 52 is attached and a proximal end link fixed to the work table 11. The end effector 52 can hold and release a workpiece W and can be, for example, a mechanical chuck, a suction nozzle, or an electromagnet. A camera 53 is also attached to the distal end link of the robot arm 51 to photograph the workpieces W conveyed by the workpiece conveyance device 20 and the table T conveyed by the table conveyance device 16, and to grasp their positions and states.
As shown in Figs. 1 and 2, the workpiece conveyance device 20 has a plurality of conveyance paths 21, each capable of conveying workpieces W in the front-rear direction (Y-axis direction) from the supply area A1 to the pickup area A2. A plurality of supply cassettes 12 storing the workpieces W to be supplied to the respective conveyance paths 21 are arranged behind the workpiece conveyance device 20.
The workpiece conveyance device 20 includes a conveyor belt 22 and partitions 25. As shown in Fig. 2, the conveyor belt 22 is wound around a driving roller 23a and a driven roller 23b. Workpieces W are placed on the upper surface portion 22a (placement portion) of the conveyor belt 22 and are conveyed in the belt feed direction when the driving roller 23a is rotationally driven by a drive motor 38 (see Fig. 4). Side walls 24a and 24b are provided on both sides of the conveyor belt 22 and rotatably support the driving roller 23a and the driven roller 23b. As shown in Fig. 3, the workpiece conveyance device 20 includes a support plate 28 on the back side of the upper surface portion 22a of the conveyor belt 22. The support plate 28 prevents the conveyor belt 22 from sagging under the weight of the workpieces W placed on the upper surface portion 22a. The support plate 28 also has openings 28a formed at the positions corresponding to the pickup areas A2 of the respective conveyance paths 21. Below each opening 28a is disposed a vertical movement device 30 for lifting the upper surface portion 22a from the back side and moving it up and down. The vertical movement device 30 includes a contact body 31 and a cylinder 32 that moves the contact body 31 up and down through the opening 28a. The cylinder 32 is supported by a support table 29 fixed to the side walls 24a and 24b. The partitions 25 are partition plates that divide the single conveyor belt 22 (upper surface portion 22a) into the plurality of conveyance paths 21. They extend parallel to the side walls 24a and 24b on both sides of the conveyor belt 22 and are arranged at equal intervals so that the conveyance paths 21 have the same path width.
Although not shown in detail, the control device 70 is configured as a well-known computer including a CPU, ROM, HDD, RAM, input/output interfaces, communication interfaces, and the like. Various signals from the encoders 45 of the supply robot 40, the encoders 55 of the pickup robot 50, the camera 53, the input device 80, and the like are input to the control device 70. Various control signals are output from the control device 70 to the drive motor 38 of the workpiece conveyance device 20, the vertical movement devices 30 (cylinders 32), the drive motors 44 of the supply robot 40, the drive motors 54 of the pickup robot 50, the camera 53, the table conveyance device 16, and the like.
The control device 70 can learn parameters for controlling the cylinder 32 of the vertical movement device 30, determine appropriate parameters based on the learning result, and control the cylinder 32 accordingly. Fig. 5 is an explanatory diagram showing a state in which the bulk-stacked workpieces W are loosened. As shown in the figure, the lump of bulk-stacked workpieces W is loosened and separated by a loosening operation in which the cylinder 32 moves (vibrates) the contact body 31 up and down, so that the workpieces W can easily be picked up by the pickup robot 50. The control device 70 controls the cylinder 32 with parameters suited to loosening the workpieces W according to their specification, such as weight, size, shape, and material. Examples of the parameters include the impact force and the vibration frequency with which the contact body 31, moving (vibrating) up and down, pushes up the conveyor belt 22.
Fig. 6 is a block diagram showing the functions of the control device 70. As shown in the figure, the control device 70 includes a parameter learning unit 70A, which mainly performs parameter learning, and a drive control unit 70B, which mainly determines appropriate parameters and performs drive control of the vertical movement device 30. The parameter learning unit 70A includes a learning model 71, an imaging processing unit 72, an evaluation processing unit 73, and a learning processing unit 74. The imaging processing unit 72 causes the camera 53 to capture an image of the bulk-stacked state of the workpieces W, or of their separation state after loosening, and receives the captured image. The evaluation processing unit 73 processes the captured image to calculate a predetermined evaluation value for the separation state of the workpieces W and evaluates that state. The learning processing unit 74 learns, by known machine learning, the relationship between the parameter of the loosening operation during its execution and the evaluation result of the evaluation processing unit 73, and builds the learning model 71, including the correlation with the specification of the workpieces W and the like. Examples of learning methods include reinforcement learning and genetic algorithms, and other methods may also be used. The drive control unit 70B includes a parameter determination unit 75 and a drive unit 76. The parameter determination unit 75 determines parameters suited to the specification of the workpieces W and the like using the learning model 71, or sets arbitrary parameters as appropriate. The drive unit 76 controls the cylinder 32 of the vertical movement device 30 based on the parameters determined by the parameter determination unit 75.
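As a minimal sketch of what the learning model 71 might record, the table below averages the evaluation scores observed per (workpiece specification, parameter set) pair and picks the best-scoring parameters for a given specification. The disclosure leaves the actual learning method open (reinforcement learning, genetic algorithms, etc.), so this structure and its names are assumptions, not the patented mechanism.

```python
# Hypothetical "learning model": a table from (spec, params) to the running
# average of observed evaluation scores, with best-parameter lookup.

class LearningModel:
    def __init__(self):
        self.table = {}  # (spec, params) -> (sum of scores, count)

    def update(self, spec, params, score):
        """Record one evaluation result for a parameter set (the learning step)."""
        s, c = self.table.get((spec, params), (0.0, 0))
        self.table[(spec, params)] = (s + score, c + 1)

    def best_params(self, spec, candidates):
        """Return the candidate parameter set with the highest mean score."""
        def mean(p):
            s, c = self.table.get((spec, p), (0.0, 0))
            return s / c if c else float("-inf")
        return max(candidates, key=mean)
```

Parameter sets are kept hashable (e.g. tuples) so they can serve as table keys; a real model would also interpolate across workpiece specifications rather than treat each as an independent key.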
In the work system 10 configured as described above, controls such as workpiece supply control, conveyance control, loosening operation control, and pick-and-place control are performed in sequence. The supply control drives the supply robot 40 so that workpieces W are picked out of the supply cassettes 12 and supplied to the supply areas A1 of the corresponding conveyance paths 21 in a supply order, for example an order designated by an operator through the input device 80. The conveyance control drives the workpiece conveyance device 20 so that the workpieces W supplied to the supply area A1 reach the pickup area A2. The loosening operation control drives the cylinder 32 of the vertical movement device 30 corresponding to the pickup area A2 in a state where the workpieces W have reached the pickup area A2. The pick-and-place control drives the pickup robot 50 so as to pick up the workpieces W separated by the loosening operation control and place them on the table T in alignment. In the pick-and-place control, the camera 53 images the workpieces W in the pickup area A2, and the pickup robot 50 is driven so as to pick up a workpiece W selected by processing the captured image. These controls may be executed in parallel for the workpieces W of each conveyance path 21 insofar as they do not affect the control of the workpieces W of the other conveyance paths 21. The details of the loosening operation control are described below based on the loosening operation control routine shown in Fig. 7.
In the loosening operation control routine of Fig. 7, the control device 70 first determines whether it is time to start the loosening operation (S100). The control device 70 determines that it is the start timing when workpieces W have reached the pickup area A2 through the conveyance control and the vertical movement device 30 is ready to be driven. The control device 70 may also determine that it is the start timing when, for example, a loosening operation has already been performed, some workpieces W have been picked up, and the remaining workpieces W are to be loosened again. When it determines that it is time to start the loosening operation, the control device 70 executes the loosening-operation start processing shown in Fig. 8 (S105).
In the loosening-operation start processing of Fig. 8, the control device 70 first initializes the number n indicating the imaging order to 1 (S200) and, before the loosening operation starts, captures image 1, i.e. the image of number 1, with the camera 53 (S205). Next, the control device 70 processes image 1 to detect the outer edge of the region occupied by the lump of workpieces W and calculates the region area A(1) and its center of gravity G(1) (S210). Since the workpieces W are supplied to the supply area A1 in bulk and conveyed to the pickup area A2, the plurality of workpieces W appear in image 1 intertwined with one another as a lump, and the region of the workpieces W differs in luminance value and the like from the upper surface portion 22a of the conveyor belt 22 forming the background. In S210, therefore, image 1 is converted into, for example, a grayscale image, the boundary between the lump of workpieces W and the upper surface portion 22a is detected in the grayscale image as the outer edge of the region of the workpieces W, and the area of the region enclosed by that outer edge is calculated as the region area A(n) (here, A(1)). The position of the center of gravity of the enclosed region is likewise calculated as the center of gravity G(n) (here, G(1)). The control device 70 is not limited to using a grayscale image and may use a binarized image instead. Next, the control device 70 sets the parameters for the loosening operation (S215), starts the loosening operation by driving the cylinder 32 of the vertical movement device 30 based on the set parameters (S220), and then ends this processing. In S215, the control device 70 selects from the learning model 71 parameters suited to the current workpieces W and sets them; for a new type of workpiece W for which such a selection is difficult, it may set arbitrary parameters as appropriate.
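The area and center-of-gravity calculation of S210 (and later S125) can be illustrated on a binarized image, which the text mentions as an alternative to grayscale processing. This pure-Python sketch treats 1-pixels as the lump of workpieces; the concrete computation is an assumption, as the patent does not spell it out.

```python
# Sketch of the S210/S125 measurement on a binarized image: pixels with value
# 1 belong to the lump of workpieces, 0 to the belt background. The region
# area A is the pixel count and the center of gravity G is the mean pixel
# coordinate of the region.

def area_and_centroid(binary):
    """binary: 2D list of 0/1 rows. Returns (A, (gx, gy)), or (0, None)
    if no workpiece pixels are present."""
    pixels = [(x, y)
              for y, row in enumerate(binary)
              for x, v in enumerate(row) if v]
    if not pixels:
        return 0, None
    a = len(pixels)
    gx = sum(x for x, _ in pixels) / a
    gy = sum(y for _, y in pixels) / a
    return a, (gx, gy)
```

A production system would instead detect the outer edge (e.g. a contour) and take the area and moments of the enclosed region, which tolerates holes inside the lump; the pixel-counting version keeps the sketch self-contained.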
After executing the loosening-operation start processing of S105, the control device 70 determines whether a predetermined timing has been reached during the loosening operation (S110). The predetermined timing may be, for example, the elapse of a predetermined time from the start of the loosening operation, or the completion of a predetermined number of up-and-down strokes of the cylinder 32 of the vertical movement device 30; it occurs multiple times between the start and the end of the loosening operation. When it determines in S110 that the predetermined timing has been reached, the control device 70 increments the number n by 1 (S115) and captures image n, i.e. the image of number n, with the camera 53 while the loosening operation is being executed (S120). Capturing image n during execution of the loosening operation means capturing it without interrupting the continuous vertical movement of the cylinder 32. Image n is therefore captured while the workpieces W are being tossed in various directions by the vibration, so the workpieces W appear blurred in it.
Next, the control device 70 processes image n to calculate the region area A(n) and the center of gravity G(n) (S125), and determines whether the workpieces W are separated (spread out) sufficiently for the pickup robot 50 to pick them up (S130). The processing of S125 is performed in the same manner as S210 of the loosening-operation start processing, except that image n is used. Even in image n, in which the workpieces W are blurred, the rough boundary between the lump of workpieces W and the upper surface portion 22a of the conveyor belt 22 can still be detected, so the outer edge of the region of the workpieces W can be found and the region area A(n) and center of gravity G(n) calculated. In S130, the control device 70 determines that the workpieces W are sufficiently separated when, for example, some of the workpieces W can be recognized in the processed image n and picked up by the pickup robot 50. Alternatively, the control device 70 may predict the area Ae that the workpieces W would occupy in a sufficiently separated state based on, for example, their specification and number, and determine that they are sufficiently separated when the region area A(n) calculated in S125 is equal to or larger than Ae.
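The alternative sufficiency test of S130 can be sketched as a comparison of the measured region area A(n) against a predicted separated area Ae. The per-piece footprint and the margin factor below are hypothetical stand-ins for the "specification and number of workpieces" the text mentions; the patent gives no formula for Ae.

```python
# Sketch of the S130 area-based sufficiency test: predict the area Ae that
# the workpieces would cover once sufficiently separated, then compare the
# measured region area A(n) against it.

def sufficiently_separated(area_n, footprint_per_piece, n_pieces, margin=1.5):
    """footprint_per_piece: projected area of one workpiece (a spec value);
    margin: assumed spreading factor for 'sufficiently separated'."""
    ae = footprint_per_piece * n_pieces * margin  # predicted separated area Ae
    return area_n >= ae
```

When the test returns True, the routine would proceed to S160 and stop the cylinder; otherwise evaluation and learning continue at S135.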
When it determines in S130 that the workpieces W are not yet sufficiently separated, the control device 70 calculates the difference ΔA and the difference ΔG as evaluation values and evaluates the separation state (S135). Here, the difference ΔA is calculated as the difference between the region area A(n) and a reference area, and the difference ΔG as the difference between the center of gravity G(n) and a reference center of gravity.
Fig. 9 is an explanatory diagram illustrating an example of a method of calculating the differences ΔA and ΔG. In the method shown in Fig. 9A, the difference ΔA (= A(n) - A(1)) and the difference ΔG (= G(n) - G(1)) are calculated using the region area A(1) and the center of gravity G(1) of image 1, captured at the start of the loosening operation, as references. Because image 1 is taken before the loosening operation starts, it is a clear image in which the workpieces W are stationary and free of blur, so the differences ΔA and ΔG can be calculated with high accuracy by comparing against it. Further, since the differences ΔA and ΔG tend to grow as the loosening operation continues, the evaluation is easy to perform and learning can proceed more appropriately. In the method shown in Fig. 9B, the difference ΔA (= A(n) - A(n-1)) and the difference ΔG (= G(n) - G(n-1)) are calculated using, as references, the region area A(n-1) and the center of gravity G(n-1) of image (n-1), captured at the predetermined timing immediately before the timing at which image n is captured. This method allows evaluation even when a parameter is changed during the loosening operation in the processing described later. That is, because the control device 70 can change a parameter and evaluate it while continuing the loosening operation, the parameter can be learned appropriately without interrupting the loosening operation. Either of the methods of Figs. 9A and 9B may be selected, for example, according to an operator's designation via the input device 80, or the method of Fig. 9A may be switched to that of Fig. 9B when a parameter is changed as described later.
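The two reference choices of Figs. 9A and 9B can be sketched as follows. Treating ΔG as the Euclidean distance between centers of gravity is an assumption here, since the text only calls it a "difference".

```python
# Sketch of S135 under the two reference choices of Fig. 9: compare image n
# against the pre-operation image 1 (Fig. 9A, reference="initial") or against
# the immediately preceding image n-1 (Fig. 9B, reference="previous").

import math

def deltas(areas, centroids, n, reference="initial"):
    """areas[i] and centroids[i] hold A(i+1) and G(i+1); n is the 1-based
    number of the current image. Returns (dA, dG)."""
    ref = 0 if reference == "initial" else n - 2   # image 1 or image n-1
    d_a = areas[n - 1] - areas[ref]                # area difference dA
    (x1, y1), (x2, y2) = centroids[ref], centroids[n - 1]
    d_g = math.hypot(x2 - x1, y2 - y1)             # centroid shift dG
    return d_a, d_g
```

Switching `reference` from "initial" to "previous" corresponds to the change the text describes when a parameter is altered mid-operation: the comparison baseline moves forward so each parameter is judged only on what it changed.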
Fig. 10 is an explanatory diagram showing an example of the region area A and the center of gravity G. In Fig. 10, the reference area A(1) and center of gravity G(1) used in the method of Fig. 9A are shown by solid lines, and the region area A(n) and center of gravity G(n) of image n by broken lines. Fig. 10A shows a state in which the center of gravity G is almost unchanged while the region area A has grown, so that the difference ΔG is small and the difference ΔA is large. In this state the workpieces W are gradually separating without deviating greatly from the pickup area A2, so the evaluation is high. Fig. 10B shows a state in which the region area A is almost unchanged while the center of gravity G has shifted, so that ΔA is small and ΔG is large. In this state the lump of workpieces W has not broken apart but has merely been displaced as a whole, so the evaluation is low. In this way, even from an image n captured during the loosening operation with the workpieces W blurred, the region of the lump of workpieces W can be detected by simple processing and the separation state evaluated. Although not shown, a state in which the center of gravity G has moved and the region area A has grown indicates that the lump of workpieces W is coming apart, so the evaluation is higher than that of Fig. 10B.
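A hypothetical scoring rule consistent with the qualitative cases of Fig. 10 (high evaluation when ΔA is large and ΔG small, low when the lump merely shifts) could look like the following. The linear form and the weights are assumptions; the patent gives no formula, only the ordering of the cases.

```python
# Assumed scoring rule for the Fig. 10 cases: area growth (dA) raises the
# separation score, while a shift of the whole lump (dG) with little area
# growth lowers it.

def separation_score(d_a, d_g, w_area=1.0, w_shift=1.0):
    """Return a scalar evaluation of the separation state from dA and dG."""
    return w_area * d_a - w_shift * d_g
```

Under this rule the Fig. 10A case (large ΔA, small ΔG) scores above the Fig. 10B case (small ΔA, large ΔG), and the unillustrated case (both large) falls between them, matching the text's ordering.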
The control device 70 updates the learning model 71 by learning this evaluation result for the current parameters (S140) and determines whether a change to more appropriate parameters is necessary (S145), based on whether more appropriate parameters for the current workpieces W can be selected from the updated learning model 71. If it determines that a parameter change is necessary, the control device 70 switches to the more appropriate parameters and continues the loosening operation (S150), then returns to S110. If, on the other hand, it determines that no change is necessary because the separation state under the current parameters is evaluated relatively highly, the control device 70 continues the loosening operation with the current parameters (S155) and returns to S110. When it determines in S130, while this processing is repeated, that the workpieces W are sufficiently separated, the control device 70 stops driving the cylinder 32 of the vertical movement device 30 to end the loosening operation (S160) and returns to S100.
Here, the correspondence between the elements of the present embodiment and those of the present disclosure is made explicit. The cylinder 32 of the vertical movement device 30 of the present embodiment corresponds to the actuator; S120 of the loosening operation control routine of Fig. 7 corresponds to the operation-period imaging step, S125 and S135 to the evaluation step, and S140 to the learning step. S205 of the loosening-operation start processing of Fig. 8 corresponds to the pre-operation imaging step, and S210 to the acquisition step. The work system 10 corresponds to the work system, the pickup robot 50 to the robot, and the camera 53 to the imaging device; the imaging processing unit 72 that executes S120 of the loosening operation control routine corresponds to the imaging processing unit, the evaluation processing unit 73 that executes S125 and S135 to the evaluation processing unit, and the learning processing unit 74 that executes S140 to the learning processing unit.
In the parameter learning method of the present embodiment described above, the image n captured during the loosening operation on the plurality of workpieces W is processed to evaluate their separation state, and the relationship between the evaluation result and the parameter of the loosening operation during its execution is learned. Since the loosening operation of the vertical movement device 30 need not be interrupted for learning, learning can be performed appropriately without reducing the efficiency of the loosening work.
Further, since the bulk-stacked state of the plurality of workpieces W is obtained by processing image 1, captured before the loosening operation starts, and the separation state of the workpieces W is evaluated against it, the evaluation value can be obtained with high accuracy and learning can be performed more appropriately. Also, since the separation state of the workpieces W can be evaluated with reference to the image (n-1) preceding the image n captured at each predetermined timing during the loosening operation, the evaluation value can be acquired and learned without interrupting the loosening operation even when a parameter is changed mid-operation.
Further, since the separation state is evaluated using the area A(n) of the region enclosed by the outer edge of the region of the workpieces W, it can be evaluated appropriately by simple processing even when a clear image cannot be obtained. And since the center of gravity G(n) is used together with the area A(n), the separation state can be evaluated still more appropriately.
Needless to say, the present disclosure is not limited to the above embodiment and can be implemented in various forms as long as they fall within its technical scope.
For example, in the above embodiment, the evaluation is performed using the area A(n) and the center of gravity G(n), but the evaluation is not limited to this; only the area A(n) may be used, or another evaluation value may be used. For example, since the height of the pile of workpieces W gradually decreases as the bulk stacked state is released, the height of the pile may be detected from an image captured from the side and used as an evaluation value.
In the above embodiment, the bulk stacked state of the workpieces W in the image 1 and the separation state of the workpieces W in the previous image (n-1) are both used as references for the evaluation, but the present disclosure is not limited to this, and either one of the two may be used as the reference. Alternatively, the separation state of the workpieces W in the image captured immediately before a parameter change may be used as the reference for evaluating the image captured immediately after the change.
In the above embodiment, the workpiece conveyance device 20 is provided with a vertical movement device 30 for each conveyance path 21, but the upper surface portions 22a of the plurality of conveyance paths 21 may instead be moved up and down collectively by a single vertical movement device 30. In this case, the workpiece conveyance device 20 may be provided with a support plate having an opening formed so as to straddle the plurality of conveyance paths 21.
In the above embodiment, the workpiece conveyance device 20 is provided with the vertical movement device 30 that moves the conveyor belt 22 (upper surface portion 22a) up and down in the pickup area A2, but the vertical movement device 30 may instead be disposed so as to move the belt up and down in the supply area A1 or another area.
In the above embodiment, the cylinder 32 of the vertical movement device 30 is exemplified as the actuator for releasing the bulk stacked state of the workpieces W, but the present disclosure is not limited to this. For example, the bulk stacked state of the workpieces W may be released by an actuator that reciprocates a leveling member such as a brush in the X-axis direction or the Y-axis direction. Even in this case, by capturing the image n when the leveling member reaches the forward end position or the backward end position, the separation state of the workpieces W can be evaluated and learned without interrupting the reciprocating movement of the actuator. The parameters in this case include the angle at which the leveling member abuts against the workpieces W, the reciprocating speed, and the like. Alternatively, a leveling member may be attached as the end effector 52 of the pickup robot 50, and the pickup robot 50 may be caused to perform the release operation (leveling operation). In that case, it suffices to learn the parameters for controlling the drive motors 54 serving as the actuators of the pickup robot 50. In this way, the actuator that releases the workpieces by a predetermined operation may be included in a robot or the like that picks up the workpieces and performs a predetermined task.
Here, the parameter learning method and the work system according to the present disclosure may be configured as follows. For example, the parameter learning method according to the present disclosure may include: a pre-operation imaging step of imaging the plurality of workpieces before the predetermined operation is started; and an acquisition step of processing the image captured in the pre-operation imaging step to acquire the bulk stacked state of the plurality of workpieces, wherein in the evaluation step, the separation state is evaluated using the bulk stacked state acquired in the acquisition step as a reference. In this way, the separation state can be evaluated with high accuracy based on a relatively clear image captured before the start of the operation, so learning can be performed more appropriately.
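Using the pre-operation bulk stacked state as a reference can be sketched in one line. This is an illustrative assumption (the patent does not fix an evaluation formula): separation is scored as the relative growth of the region area over the baseline area taken from the clear image 1.

```python
def evaluate_with_baseline(baseline_area: float, current_area: float) -> float:
    """Score separation as the relative spread versus the bulk stacked
    state acquired from the pre-operation image (image 1).

    baseline_area: area A(1) from the image captured before the operation.
    current_area:  area A(n) from an image captured during the operation.
    A larger score means the workpieces have spread out further.
    """
    return (current_area - baseline_area) / baseline_area

# Example: the region grew from 9.0 to 13.5 pixels, a 50% spread.
score = evaluate_with_baseline(baseline_area=9.0, current_area=13.5)
```

Normalizing by the baseline makes evaluations comparable across piles of different sizes, which is one way the clear pre-operation image could improve accuracy.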
In the parameter learning method according to the present disclosure, in the operation period imaging step, the plurality of workpieces may be imaged at each predetermined timing during execution of the predetermined operation, and in the evaluation step, each time an image is captured in the operation period imaging step, the separation state may be evaluated using as a reference the previous separation state obtained when the previous image captured in the operation period imaging step was processed. In this way, even when the parameter is changed during execution of the predetermined operation, the change in the separation state can be grasped appropriately, so learning can be performed more appropriately.
In the parameter learning method according to the present disclosure, in the evaluation step, the image may be processed to detect the outer edge of the region in which the plurality of workpieces are located, the area of the region may be calculated, and the separation state may be evaluated based on the area. In this way, even when a clear image cannot be obtained because the image is captured during execution of the predetermined operation, the separation state can be evaluated appropriately by simple processing.
The work system of the present disclosure includes: an actuator that releases the bulk stacked state of a plurality of workpieces by a predetermined operation; a robot that picks up the workpieces and performs a predetermined task; and an imaging device that captures an image, the work system further including: an imaging processing unit that images the plurality of workpieces with the imaging device during execution of the predetermined operation; an evaluation processing unit that processes the captured image to evaluate the separation state of the plurality of workpieces; and a learning processing unit that learns a relationship between an evaluation result of the evaluation processing unit and the parameter in the predetermined operation being executed.
Like the parameter learning method described above, the work system of the present disclosure can learn the parameter for controlling the actuator that executes the predetermined operation using an image captured during execution of that operation, so the work system need not interrupt the predetermined operation for learning. Therefore, the parameter can be learned appropriately without reducing the working efficiency of releasing the bulk stacked state of the workpieces. Functions realizing the steps of the parameter learning method described above may also be added to this work system.
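The division of roles among the imaging processing unit 72, the evaluation processing unit 73, and the learning processing unit 74 can be wired together as in this hypothetical sketch. Plain callables stand in for the camera 53 and the three units; none of these interfaces appear in the patent, which leaves the implementation open.

```python
from typing import Callable

def run_release_cycle(
    capture: Callable[[], object],          # imaging processing unit: grab an image via the camera
    evaluate: Callable[[object], float],    # evaluation processing unit: image -> separation score
    learn: Callable[[float, float], None],  # learning processing unit: (evaluation, parameter)
    parameter: float,
    timings: int,
) -> None:
    """Capture, evaluate, and learn at each predetermined timing while the
    release operation keeps running; the loop never pauses the actuator."""
    for _ in range(timings):
        image = capture()
        score = evaluate(image)
        learn(score, parameter)

# Minimal wiring with stub callables, just to show the data flow.
log = []
run_release_cycle(
    capture=lambda: "img",
    evaluate=lambda img: 1.0,
    learn=lambda s, p: log.append((s, p)),
    parameter=0.8,
    timings=3,
)
```

Keeping the three units behind separate callables mirrors the claimed structure, where each unit could be replaced (e.g. a side-view height evaluator) without touching the others.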
Industrial applicability
The present disclosure can be utilized in the manufacturing industry of work systems and the like.
Description of the reference numerals
10 work system, 11 work table, 12 supply cassette, 16 table conveyance device, 20 workpiece conveyance device, 21 conveyance path, 22 conveyor belt, 22a upper surface portion, 23a driving roller, 23b driven roller, 24a, 24b side walls, 28 support plate, 28a opening, 29 support table, 30 vertical movement device, 31 abutting body, 32 cylinder, 38 drive motor, 40 supply robot, 41 robot arm, 42 end effector, 44 drive motor, 45 encoder, 50 pickup robot, 51 robot arm, 52 end effector, 53 camera, 54 drive motor, 55 encoder, 70 control device, 70A parameter learning unit, 70B drive control unit, 71 learning model, 72 imaging processing unit, 73 evaluation processing unit, 74 learning processing unit, 75 parameter determination unit, 76 drive unit, 80 input device, T table.

Claims (4)

1. A parameter learning method for learning a parameter for controlling an actuator that releases a bulk stacked state of a plurality of workpieces by a predetermined operation,
the parameter learning method comprising:
an operation period imaging step of imaging the plurality of workpieces during execution of the predetermined operation;
an evaluation step of processing the image captured in the operation period imaging step to evaluate a separation state of the plurality of workpieces;
a learning step of learning a relationship between an evaluation result of the evaluation step and the parameter in the predetermined operation being executed;
a pre-operation imaging step of imaging the plurality of workpieces before the predetermined operation is started; and
an acquisition step of processing the image captured in the pre-operation imaging step to acquire the bulk stacked state of the plurality of workpieces,
wherein in the evaluation step, the separation state is evaluated using the bulk stacked state acquired in the acquisition step as a reference.
2. The parameter learning method according to claim 1, wherein
in the operation period imaging step, the plurality of workpieces are imaged at each predetermined timing during execution of the predetermined operation, and
in the evaluation step, each time an image is captured in the operation period imaging step, the separation state is evaluated using as a reference the previous separation state obtained when the previous image captured in the operation period imaging step was processed.
3. The parameter learning method according to claim 1 or 2, wherein
in the evaluation step, the image is processed to detect an outer edge of a region in which the plurality of workpieces are located, an area of the region is calculated, and the separation state is evaluated based on the area.
4. A work system comprising: an actuator that releases a bulk stacked state of a plurality of workpieces by a predetermined operation; a robot that picks up the workpieces and performs a predetermined task; and an imaging device that captures an image,
the work system further comprising:
an imaging processing unit that images the plurality of workpieces with the imaging device during execution of the predetermined operation;
an evaluation processing unit that processes the captured image to evaluate a separation state of the plurality of workpieces; and
a learning processing unit that learns a relationship between an evaluation result of the evaluation processing unit and a parameter in the predetermined operation being executed,
wherein the evaluation processing unit processes an image obtained by the imaging processing unit imaging the plurality of workpieces before the predetermined operation is started to acquire the bulk stacked state of the plurality of workpieces, and evaluates the separation state using the acquired bulk stacked state as a reference.
CN201880096177.2A 2018-08-03 2018-08-03 Parameter learning method and operating system Active CN112512942B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/029287 WO2020026447A1 (en) 2018-08-03 2018-08-03 Parameter learning method and work system

Publications (2)

Publication Number Publication Date
CN112512942A CN112512942A (en) 2021-03-16
CN112512942B true CN112512942B (en) 2022-05-17

Family

ID=69231559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880096177.2A Active CN112512942B (en) 2018-08-03 2018-08-03 Parameter learning method and operating system

Country Status (3)

Country Link
JP (1) JP7121127B2 (en)
CN (1) CN112512942B (en)
WO (1) WO2020026447A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023184034A1 (en) * 2022-03-31 2023-10-05 Ats Automation Tooling Systems Inc. Systems and methods for feeding workpieces to a manufacturing line

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS62174617A (en) * 1986-01-29 1987-07-31 Teijin Eng Kk Weighing method for granule
JP3172494B2 (en) * 1997-11-17 2001-06-04 アデプト テクノロジー インコーポレイティッド Impact type parts feeder
CN103958075A (en) * 2011-12-07 2014-07-30 花王株式会社 Application method for powder and application device and method for manufacturing heating element using same
CN104085667A (en) * 2014-06-30 2014-10-08 合肥美亚光电技术股份有限公司 Automatic feeding adjustment module and method, device and bulk foreign body detection mechanism thereof
CN106687779A (en) * 2014-09-19 2017-05-17 株式会社石田 Dispersion and supply device and combination weighing device
CN107635677A (en) * 2015-05-18 2018-01-26 费南泰克控股有限公司 Inspection method and inspection system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010241592A (en) * 2009-04-03 2010-10-28 Satoru Kobayashi Non-vibration type part feeder
CA2735512A1 (en) * 2010-04-01 2011-10-01 Siemens Aktiengesellschaft Method and apparatus for measuring a parameter during the transport of objects to a processing device
JP6522488B2 (en) * 2015-07-31 2019-05-29 ファナック株式会社 Machine learning apparatus, robot system and machine learning method for learning work taking-out operation
WO2018092211A1 (en) * 2016-11-16 2018-05-24 株式会社Fuji Transfer device and transport system


Also Published As

Publication number Publication date
JP7121127B2 (en) 2022-08-17
CN112512942A (en) 2021-03-16
WO2020026447A1 (en) 2020-02-06
JPWO2020026447A1 (en) 2021-08-02

Similar Documents

Publication Publication Date Title
JP6734402B2 (en) Work machine
CN114286740B (en) Work robot and work system
JP6279581B2 (en) Mounting apparatus and component detection method
WO2007108352A1 (en) Electronic part mounting device and electronic part mounting method
US20120240388A1 (en) Component mounting method and component mounting device
CN112512942B (en) Parameter learning method and operating system
JP7283881B2 (en) work system
CN112166660B (en) Component mounting system and method for instructing placement of component supply unit
EP3205457B1 (en) Transfer method and transfer apparatus
JP7312903B2 (en) Parts mounting machine
JP5606424B2 (en) Component extraction method and component extraction system
WO2018092211A1 (en) Transfer device and transport system
CN108136595B (en) Component supply system and pick-up device for distributed components
JP2018153899A (en) Component supply system
WO2023013056A1 (en) Workpiece picking method and workpiece picking system
JPWO2019016948A1 (en) Parts supply device and work system
CN111278612B (en) Component transfer device
CN114450133A (en) Robot control system, robot control method, and program
JP7440635B2 (en) robot system
JP7257514B2 (en) Component mounting system and learning device
JP6959128B2 (en) Component mounting device
WO2020255186A1 (en) Component mounter
JPH03159682A (en) Transporting apparatus of sheet material
CN116671272A (en) Component supply control system
TW202027934A (en) System for eliminating interference of randomly stacked workpieces

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant