WO2023084713A1 - Object sorting system - Google Patents

Object sorting system

Info

Publication number
WO2023084713A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
recognition result
unit
bin
storage unit
Application number
PCT/JP2021/041598
Other languages
French (fr)
Japanese (ja)
Inventor
裕紀 谷崎
雅信 本江
健 李
Original Assignee
PFU Limited
Application filed by PFU Limited
Priority to PCT/JP2021/041598
Publication of WO2023084713A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B07SEPARATING SOLIDS FROM SOLIDS; SORTING
    • B07CPOSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
    • B07C5/00Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
    • B07C5/34Sorting according to other particular properties
    • B07C5/342Sorting according to other particular properties according to optical properties, e.g. colour

Definitions

  • the present disclosure relates to an object sorting system.
  • At waste disposal sites, a large amount of waste flows on belt conveyors and is processed every day, and the waste is sorted manually. While sorting waste is a simple task, it places a heavy burden on the workers who sort it (hereinafter sometimes referred to as "sorting workers"). Devices that sort waste automatically (hereinafter sometimes referred to as "waste sorting devices") have therefore been developed.
  • When a waste sorting device takes over the work that a sorting worker has been doing, it is conceivable that the waste sorting device recognizes each piece of waste flowing on the belt conveyor and, based on the recognition result, extracts the desired waste (hereinafter sometimes referred to as "desired waste") from the waste group flowing on the belt conveyor using a robot hand or a suction pad. When a plurality of types of waste flow on the belt conveyor in a mixed state, the waste sorting device therefore needs to identify the type of each piece of waste. Recognition using a trained model generated by machine learning is effective for recognizing the various types of waste.
  • However, machine learning requires an enormous amount of training data, and if the annotation work performed on image data of various types of waste to generate that training data is done manually by visually checking the images, generating the training data takes considerable labor. This disclosure therefore proposes a technique that can reduce the labor required to generate training data.
  • The object sorting system of the present disclosure has a camera, a recognition unit, a storage unit, a sorting unit, and an extraction unit.
  • The camera captures a first image, which is an image of an object group, on a transport path along which the object group is transported.
  • The recognition unit recognizes, using a trained model, a second image that is an image of each object present in the first image, assigns feature information, which is information indicating features of the second image, to the second image, and generates a recognition result in which the feature information is assigned to the second image in the first image.
  • The storage unit stores the recognition result.
  • The sorting unit sorts out a desired object from the object group conveyed on the transport path based on the recognition result.
  • When the sorting unit erroneously sorts an undesired object other than the desired object, the extraction unit determines, among the recognition results stored in the storage unit, an erroneous recognition result that is a recognition result including an image of the erroneously sorted undesired object, and extracts the erroneous recognition result from the storage unit.
  • According to the disclosed technique, the effort required to generate training data can be reduced.
  • FIG. 1 is a diagram illustrating a configuration example of an object sorting system according to Example 1 of the present disclosure.
  • FIG. 2 is a diagram illustrating a configuration example of a control device according to Example 1 of the present disclosure.
  • FIG. 3 is a diagram illustrating an example of feature information according to Example 1 of the present disclosure.
  • FIG. 4 is a diagram illustrating a trace example of barycentric coordinates according to Example 1 of the present disclosure.
  • FIG. 5 is a diagram illustrating a trace example of barycentric coordinates according to Example 1 of the present disclosure.
  • FIG. 6 is a diagram illustrating a trace example of barycentric coordinates according to Example 1 of the present disclosure.
  • FIG. 7 is a diagram illustrating a trace example of barycentric coordinates according to Example 1 of the present disclosure.
  • FIG. 8 is a diagram illustrating an example of extraction of desired waste according to Example 1 of the present disclosure.
  • FIG. 9 is a diagram illustrating an operation example of the control device according to Example 1 of the present disclosure.
  • FIG. 10 is a diagram illustrating an operation example of the control device according to Example 1 of the present disclosure.
  • FIG. 11 is a diagram illustrating an operation example of the control device according to Example 1 of the present disclosure.
  • FIG. 12 is a diagram illustrating an operation example of the control device according to Example 1 of the present disclosure.
  • FIG. 13 is a diagram illustrating an operation example of the control device according to Example 1 of the present disclosure.
  • FIG. 14 is a diagram illustrating an operation example of the control device according to Example 1 of the present disclosure.
  • FIG. 15 is a diagram illustrating an operation example of the control device according to Example 1 of the present disclosure.
  • FIG. 16 is a diagram illustrating an example of button color transition according to Example 1 of the present disclosure.
  • FIG. 17 is a diagram illustrating an example of button color transition according to Example 1 of the present disclosure.
  • FIG. 18 is a diagram illustrating a configuration example of a control device according to Example 2 of the present disclosure.
  • FIG. 19 is a diagram illustrating a configuration example of a control device according to Example 3 of the present disclosure.
  • FIG. 1 is a diagram illustrating a configuration example of an object sorting system according to Example 1 of the present disclosure.
  • The object sorting system 1 has a control device 10, a camera 20, an object sorting device 30, a belt conveyor 40, a push button 50, a display 60, and an input device 70.
  • The push button 50, the display 60, and the input device 70 are connected to the control device 10.
  • Examples of the input device 70 include a pointing device such as a mouse, and a keyboard.
  • The control device 10, the camera 20, and the object sorting device 30 are connected to each other via a network.
  • The case where the object sorting system 1 shown in FIG. 1 is installed in a waste disposal site where a waste group flows on the belt conveyor 40 will be described below as an example. In other words, the case where the objects to be sorted by the object sorting system 1 are waste will be described below as an example.
  • the object sorting system 1 may be installed in an assembly factory or the like where a group of parts flows on a belt conveyor. That is, the objects to be sorted by the object sorting system 1 are not limited to waste, and the object sorting system 1 can be used for various objects.
  • the belt conveyor 40 conveys the waste group placed on the belt conveyor 40 in the conveying direction CD. That is, the belt conveyor 40 forms a transport path along which the waste group is transported in the transport direction CD.
  • The camera 20 is arranged above the belt conveyor 40 on which the waste group is conveyed, has a predetermined angle of view, and continuously photographs a predetermined area on the upper surface of the belt conveyor 40 from above at a constant frame rate. The image captured by the camera 20 (hereinafter sometimes referred to as a "captured image") is therefore an image of the waste group. Each captured image is transmitted from the camera 20 to the control device 10.
  • FIG. 2 is a diagram illustrating a configuration example of a control device according to the first embodiment of the present disclosure.
  • The control device 10a shown in FIG. 2 corresponds to the control device 10 shown in FIG. 1.
  • The object sorting device 30a shown in FIG. 2 corresponds to the object sorting device 30 shown in FIG. 1.
  • The control device 10a includes an image recognition unit 11, a recognition result storage unit 12, a recognition result extraction unit 13a, a label correction unit 14, a teacher data storage unit 15, a machine learning unit 16, a trained model storage unit 17, and a button color control unit 18.
  • The image recognition unit 11 uses the trained model stored in the trained model storage unit 17 to recognize each waste image (hereinafter sometimes referred to as a "waste image") present in the captured image, and assigns to each recognized waste image information indicating the features of that waste image (hereinafter sometimes referred to as "feature information"). The image recognition unit 11 assigns feature information to waste images whose image recognition score is equal to or greater than a threshold TH1.
  • The feature information includes information indicating the type of the waste image (hereinafter sometimes referred to as "type information"), information indicating the contour of the waste image (hereinafter sometimes referred to as "contour information"), and the coordinates of the area centroid of the waste image (hereinafter sometimes referred to as "barycentric coordinates").
  • The image recognition unit 11 recognizes the waste images by, for example, performing instance segmentation on the captured image.
  • The image recognition unit 11 associates each waste image with the feature information assigned to it, generates a recognition result in which feature information is assigned to each waste image in the captured image (hereinafter sometimes referred to as an "image recognition result"), and outputs the generated image recognition result to the recognition result storage unit 12 and the object sorting device 30a.
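  • As a rough illustration of how such an image recognition result might be assembled from the output of a generic instance-segmentation model (the function names, data layout, and threshold value TH1 = 0.8 are assumptions for illustration, not taken from the patent):

        TH1 = 0.8  # hypothetical image-recognition score threshold

        def build_image_recognition_result(captured_image, segments, capture_time):
            """Keep only segments whose score is at least TH1 and attach feature information."""
            waste_images = []
            for seg in segments:  # assumed: each seg has 'score', 'label', 'contour', 'centroid'
                if seg["score"] < TH1:
                    continue  # low-confidence detections receive no feature information
                waste_images.append({
                    "label": seg["label"],        # type information, e.g. "brown bottle"
                    "contour": seg["contour"],    # contour coordinate points in camera coordinates
                    "centroid": seg["centroid"],  # barycentric coordinates (X, Y)
                })
            return {"time": capture_time, "image": captured_image, "waste_images": waste_images}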
  • The recognition result storage unit 12 stores the image recognition results sequentially output from the image recognition unit 11 in chronological order. That is, the recognition result storage unit 12 stores captured images in which the waste images and the feature information are associated with each other. Further, the image recognition unit 11 outputs a signal indicating that recognition of the waste images and assignment of feature information to the waste images have been completed (hereinafter sometimes referred to as a "recognition completion signal") to the button color control unit 18.
  • The object sorting device 30a sorts out the desired waste from the waste group conveyed on the belt conveyor 40 based on the image recognition results (that is, the waste images and the feature information) output from the image recognition unit 11. The object sorting device 30a extracts the desired waste using, for example, a robot hand or a suction pad.
  • From among the captured images stored in the recognition result storage unit 12, the recognition result extraction unit 13a determines the captured image that includes an image of undesired waste, that is, waste other than the desired waste (hereinafter referred to as "undesired waste"), erroneously sorted by the object sorting device 30a (such an image is hereinafter sometimes referred to as an "erroneously sorted waste image", and such a captured image as an "erroneously sorted waste presence image"). The recognition result extraction unit 13a then determines that the recognition result including the erroneously sorted waste presence image is an erroneous recognition result.
  • The recognition result extraction unit 13a extracts, as the erroneous recognition result, the erroneously sorted waste presence image and the feature information assigned to each waste image in the erroneously sorted waste presence image from the recognition result storage unit 12.
  • The recognition result extraction unit 13a outputs the erroneous recognition result extracted from the recognition result storage unit 12 to the label correction unit 14 and displays it on the display 60.
  • The recognition result extraction unit 13a also outputs a signal indicating that extraction of the erroneous recognition result from the recognition result storage unit 12 has been completed (hereinafter sometimes referred to as an "extraction completion signal") to the button color control unit 18.
  • The push button 50 is pressed by the operator when the object sorting device 30a erroneously sorts undesired waste.
  • A signal indicating that the push button 50 has been pressed (hereinafter sometimes referred to as a "button press signal") is output from the push button 50 to the recognition result extraction unit 13a and the button color control unit 18.
  • The recognition result extraction unit 13a extracts the erroneous recognition result from the recognition result storage unit 12 when a button press signal is input in response to pressing of the push button 50.
  • The push button 50 is an example of an interface (hereinafter sometimes referred to as an "erroneous sorting input interface") that receives an input indicating that the object sorting device 30a has erroneously sorted undesired waste.
  • A software button displayed on a touch panel, for example, can be used as an erroneous sorting input interface other than the push button 50. That is, the recognition result extraction unit 13a extracts the erroneous recognition result from the recognition result storage unit 12 when the erroneous sorting input interface receives an input indicating that the object sorting device 30a has erroneously sorted undesired waste.
  • The input device 70 is operated by the operator, and the operator can use the input device 70 to specify an arbitrary part of the erroneously sorted waste presence image displayed on the display 60.
  • When the part specified in the erroneously sorted waste presence image is the display portion of label information, the label correction unit 14 allows the operator to correct that label information using the input device 70.
  • The label correction unit 14 outputs the recognition result obtained after correcting the label information in the erroneous recognition result (hereinafter sometimes referred to as a "post-correction recognition result") to the teacher data storage unit 15.
  • The teacher data storage unit 15 stores in advance many waste images to which feature information has been assigned, as teacher data. The teacher data storage unit 15 additionally stores the post-correction recognition result output from the label correction unit 14 as new teacher data.
  • The machine learning unit 16 performs machine learning using the teacher data stored in the teacher data storage unit 15 and updates the trained model stored in the trained model storage unit 17 with the model obtained after that machine learning. The image recognition unit 11 therefore recognizes waste images using a trained model that has been updated by additional machine learning performed by the machine learning unit 16 using the post-correction recognition results.
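  • A hedged sketch of this correction-and-retraining flow (the data structures and function names are assumptions rather than the patent's implementation):

        def apply_label_correction(erroneous_result, target_index, corrected_label, teacher_data):
            """Replace the wrong label in an extracted erroneous recognition result and
            store the corrected result as additional teacher data."""
            erroneous_result["waste_images"][target_index]["label"] = corrected_label
            teacher_data.append(erroneous_result)  # post-correction recognition result
            return erroneous_result

        # Example: the operator corrects "brown bottle" to "other-color bottle" for one image,
        # after which the trained model would be updated by additional machine learning.
        # corrected = apply_label_correction(result, 4, "other-color bottle", teacher_data)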
  • The object sorting device 30a outputs a signal indicating its operating state (hereinafter sometimes referred to as a "sorting state signal") to the button color control unit 18.
  • The button color control unit 18 changes the color of the push button 50 (hereinafter sometimes referred to as the "button color") based on the button press signal, the recognition completion signal, the extraction completion signal, and the sorting state signal.
  • The push button 50 is configured using, for example, LEDs (Light Emitting Diodes) so that its button color can be changed.
  • FIG. 3 is a diagram illustrating an example of feature information according to Example 1 of the present disclosure.
  • Empty bottles are assumed as the waste, and the object sorting device 30a sorts each empty bottle flowing on the belt conveyor 40 into one of two types: brown empty bottles (hereinafter sometimes referred to as "brown bottles") and empty bottles having colors other than brown (hereinafter sometimes referred to as "other colors"), which are hereinafter sometimes referred to as "other-color bottles".
  • The desired waste is the brown bottle, and teacher data for the brown bottle is stored in the teacher data storage unit 15.
  • The image recognition unit 11 recognizes an empty bottle image BI present in the captured image using the trained model stored in the trained model storage unit 17, and assigns feature information to the empty bottle image BI as follows.
  • When the empty bottle image BI is an image of a brown bottle (hereinafter sometimes referred to as a "brown bottle image"), the image recognition unit 11 assigns label information LA containing the type information "brown bottle" to the empty bottle image BI.
  • When the empty bottle image BI is an image of an other-color bottle, the image recognition unit 11 assigns label information LA containing the type information "other-color bottle" to the empty bottle image BI.
  • The image recognition unit 11 also assigns contour information CO to the empty bottle image BI in the captured image.
  • The contour information CO is formed by a plurality of coordinate points (x0, y0), (x1, y1), ... along the contour of the empty bottle image BI.
  • The image recognition unit 11 calculates the area centroid of the empty bottle image BI in the captured image and assigns the barycentric coordinates DG to the empty bottle image BI.
  • The barycentric coordinates DG are formed by a single coordinate point (X, Y).
  • The plurality of coordinate points (x0, y0), (x1, y1), ... forming the contour information CO and the single coordinate point (X, Y) forming the barycentric coordinates DG are coordinate points in the coordinate system of the captured image, that is, the coordinate system of the camera 20 (hereinafter sometimes referred to as the "camera coordinate system").
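  • For illustration only, the area centroid could be computed from the contour coordinate points with the shoelace formula; the patent does not specify how the computation is performed, so this is an assumption:

        def area_centroid(contour):
            """Centroid (X, Y) of the polygon described by contour = [(x0, y0), (x1, y1), ...]."""
            a = cx = cy = 0.0
            n = len(contour)
            for i in range(n):
                x0, y0 = contour[i]
                x1, y1 = contour[(i + 1) % n]
                cross = x0 * y1 - x1 * y0
                a += cross
                cx += (x0 + x1) * cross
                cy += (y0 + y1) * cross
            a *= 0.5
            return (cx / (6.0 * a), cy / (6.0 * a))  # barycentric coordinates DG in camera coordinates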
  • <Trace of barycentric coordinates> FIGS. 4, 5, 6, and 7 are diagrams showing trace examples of barycentric coordinates according to the first embodiment of the present disclosure.
  • The image recognition unit 11 recognizes a brown bottle image BO in the captured image acquired at time t11 and calculates the barycentric coordinates DG1 of the brown bottle image BO at time t11.
  • Based on the frame rate FR [frames/sec] of the camera 20 and the conveying speed CS [mm/sec] of the belt conveyor 40, the image recognition unit 11 predicts the barycentric coordinates EG2 of the brown bottle image BO in the captured image that will be acquired at time t12, the frame following the captured image acquired at time t11.
  • The barycentric coordinates EG2 at time t12 are predicted to be shifted by CS/FR per frame in the conveying direction CD of the belt conveyor 40 (that is, the X direction in FIG. 1) with respect to the barycentric coordinates DG1 at time t11.
  • The image recognition unit 11 sets a circular allowable range TR of a predetermined size centered on the predicted barycentric coordinates EG2.
  • The image recognition unit 11 recognizes the brown bottle image BO in the captured image acquired at time t12 and calculates the barycentric coordinates DG2 of the brown bottle image BO at time t12. The image recognition unit 11 then determines whether or not the barycentric coordinates DG2 are within the allowable range TR.
  • When the barycentric coordinates DG2 are within the allowable range TR, the image recognition unit 11 determines that the subjects of the brown bottle images BO at time t11 and time t12 are the same brown bottle and sets a straight trace line TL connecting the barycentric coordinates DG1 to the barycentric coordinates DG2. On the other hand, when the barycentric coordinates DG2 are not within the allowable range TR, the image recognition unit 11 stops tracing the barycentric coordinates.
  • The image recognition unit 11 sequentially traces the barycentric coordinates as described above for the captured images that are sequentially acquired at the frame rate FR.
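  • A minimal sketch of one trace step, assuming the quantities above (CS, FR, TR) are known and using hypothetical function and parameter names:

        import math

        def trace_step(dg_prev, dg_curr, cs_mm_per_sec, fr_frames_per_sec, tr_mm):
            """Predict where the centroid should be one frame later (shifted CS/FR in the
            conveying direction X) and check the observed centroid against the circular
            allowable range TR. Returns the trace segment if the subject is judged to be
            the same bottle, otherwise None (the trace is stopped)."""
            predicted_eg = (dg_prev[0] + cs_mm_per_sec / fr_frames_per_sec, dg_prev[1])
            if math.dist(predicted_eg, dg_curr) <= tr_mm:
                return (dg_prev, dg_curr)  # straight trace line TL from DG1 to DG2
            return None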
  • FIG. 8 is a diagram illustrating an example of extraction of desired waste in Example 1 of the present disclosure.
  • The frame rate FR of the camera 20 is set based on, for example, the conveying speed CS of the belt conveyor 40 and the angle of view of the camera 20 so that the same empty bottle conveyed in the conveying direction CD can be photographed at most three times.
  • The object sorting device 30a uses the suction pad 31 to extract brown bottles, which are the desired waste, from the empty bottle group.
  • The image recognition unit 11 recognizes the brown bottle image BO in the captured image acquired at time t11 and calculates the barycentric coordinates DG1 (X1, Y1) of the brown bottle image BO at time t11.
  • The image recognition unit 11 recognizes the brown bottle image BO in the captured image acquired at time t12 and calculates the barycentric coordinates DG2 (X2, Y2) of the brown bottle image BO at time t12.
  • The image recognition unit 11 determines that the subjects of the brown bottle images BO at time t11 and time t12 are the same brown bottle and sets a straight trace line TL connecting the barycentric coordinates DG1 (X1, Y1) to the barycentric coordinates DG2 (X2, Y2).
  • The image recognition unit 11 recognizes the brown bottle image BO in the captured image acquired at time t13 and calculates the barycentric coordinates DG3 (X3, Y3) of the brown bottle image BO at time t13.
  • The image recognition unit 11 determines that the subjects of the brown bottle images BO at time t12 and time t13 are the same brown bottle and sets a straight trace line TL connecting the barycentric coordinates DG1 (X1, Y1) to the barycentric coordinates DG3 (X3, Y3).
  • The image recognition unit 11 extends the trace line TL connecting the barycentric coordinates DG1 (X1, Y1) to the barycentric coordinates DG3 (X3, Y3) to the installation position of the suction pad 31 in the X direction by linear approximation and converts the camera coordinate system into the coordinate system of the object sorting device 30a, thereby calculating the extraction target coordinates TC(X, Y) of the brown bottle. Based on the conveying speed CS, the image recognition unit 11 also calculates the time at which the brown bottle will reach the extraction target coordinates TC(X, Y) (hereinafter sometimes referred to as the "extraction target time" tH). The image recognition unit 11 transmits a control signal including the calculated extraction target coordinates TC(X, Y) and the extraction target time tH to the object sorting device 30a.
  • In accordance with the extraction target coordinates TC(X, Y) and the extraction target time tH received from the image recognition unit 11, the suction pad 31 moves to the extraction target coordinates TC(X, Y) and, at the extraction target time tH, extracts the empty bottle located at the extraction target coordinates TC(X, Y) from the empty bottle group.
  • In this way, the brown bottle that is the subject of the brown bottle images BO recognized by the image recognition unit 11 at times t11, t12, and t13 is extracted from the empty bottle group.
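  • A hedged sketch of this extraction-target calculation, assuming a hypothetical cam_to_device() conversion from the camera coordinate system to the coordinate system of the object sorting device:

        def extraction_target(dg_first, dg_last, t_last, pad_x, cs_mm_per_sec, cam_to_device):
            """Extend the trace line from dg_first to dg_last to the suction pad position
            pad_x (camera coordinates) by linear approximation, convert the result to the
            sorting-device coordinate system, and estimate the extraction target time tH."""
            (x1, y1), (x3, y3) = dg_first, dg_last
            slope = (y3 - y1) / (x3 - x1) if x3 != x1 else 0.0
            y_at_pad = y3 + slope * (pad_x - x3)
            tc = cam_to_device((pad_x, y_at_pad))        # extraction target coordinates TC(X, Y)
            t_h = t_last + (pad_x - x3) / cs_mm_per_sec  # extraction target time tH
            return tc, t_h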
  • <Operation of the control device> FIGS. 9 to 15 are diagrams showing operation examples of the control device according to the first embodiment of the present disclosure.
  • The empty bottles BOa, BOb, BOc, BOd, BOe, and BOf are sequentially conveyed by the belt conveyor 40 in the conveying direction CD from time ta to time ti.
  • The empty bottle BOa is a brown bottle, which is the desired waste, and the empty bottles BOb, BOc, BOd, BOe, and BOf are other-color bottles, which are undesired waste.
  • The camera 20 acquires, at time ta, a captured image CIa that does not include any empty bottle image; at time tb, a captured image CIb including the brown bottle image BIa, which is the image of the brown bottle BOa; at time tc, a captured image CIc including the brown bottle image BIa and the other-color bottle image BIb, which is the image of the other-color bottle BOb; and at time td, a captured image CId including the brown bottle image BIa, the other-color bottle image BIb, and the other-color bottle image BIc, which is the image of the other-color bottle BOc.
  • At time te, a captured image CIe including the other-color bottle image BId, which is the image of the other-color bottle BOd, is acquired.
  • As the empty bottle group is conveyed, the camera 20 acquires, at time tf, a captured image CIf including the other-color bottle image BIc, the other-color bottle image BId, and the other-color bottle image BIe, which is the image of the other-color bottle BOe; at time tg, a captured image CIg including the other-color bottle image BId, the other-color bottle image BIe, and the other-color bottle image BIf, which is the image of the other-color bottle BOf; at time th, a captured image CIh including the other-color bottle image BIe and the other-color bottle image BIf; at time ti, a captured image CIi including the other-color bottle image BIf; and at time tj, a captured image CIj that does not include any empty bottle image.
  • The captured images CIa, CIb, CIc, CId, CIe, CIf, CIg, CIh, and CIi, to which the feature information has been assigned, are stored in the recognition result storage unit 12 in one-to-one correspondence with their respective capture times ta, tb, tc, td, te, tf, tg, th, and ti.
  • The image recognition unit 11 correctly recognizes the brown bottle image BIa as a brown bottle image. Therefore, at time tf, the object sorting device 30a extracts the brown bottle BOa from the empty bottle group using the suction pad 31, and at time tg, moves the suction pad 31 to the release position and releases the brown bottle BOa from the suction pad 31.
  • On the other hand, the image recognition unit 11 erroneously recognizes the other-color bottle image BIc as a brown bottle image. Therefore, at time th, the object sorting device 30a extracts the other-color bottle BOc from the empty bottle group using the suction pad 31, and at time ti, moves the suction pad 31 to the release position and releases the other-color bottle BOc from the suction pad 31. That is, as a result of the image recognition unit 11 erroneously recognizing the other-color bottle image BIc as a brown bottle image, the object sorting device 30a erroneously sorts the other-color bottle BOc as a brown bottle.
  • In response to the button press signal, the recognition result extraction unit 13a determines, at time ti, the erroneous recognition result including the other-color bottle image BIc, which is the erroneously sorted waste image.
  • Based on the frame rate FR, the distance XD in the X direction between the camera 20 and the object sorting device 30a, and the conveying speed CS, the recognition result extraction unit 13a determines that, among the captured images stored in the recognition result storage unit 12, the captured images CId, CIe, and CIf associated with the times td, te, and tf are erroneously sorted waste presence images.
  • That is, the recognition result extraction unit 13a determines the time XD/CS [sec] before the time ti at which the push button 50 was pressed to be time te, the time ((XD/CS)+(1/FR)) [sec] before time ti to be time td, and the time ((XD/CS)-(1/FR)) [sec] before time ti to be time tf.
  • The recognition result extraction unit 13a then extracts, from among the captured images CIa, CIb, CIc, CId, CIe, CIf, CIg, CIh, and CIi stored in the recognition result storage unit 12 at time ti, the recognition results including the captured images CId, CIe, and CIf corresponding to the times td, te, and tf as the erroneous recognition results.
  • The recognition result extraction unit 13a outputs the captured images CId, CIe, and CIf extracted from the recognition result storage unit 12, together with their feature information, to the label correction unit 14, and displays them on the display 60 together with the contours represented by the contour information.
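  • As a sketch of this frame selection (the timestamp matching is an assumption; the patent only states how td, te, and tf are derived from ti, XD, CS, and FR):

        def frames_to_extract(t_press, xd_mm, cs_mm_per_sec, fr_frames_per_sec, stored_times):
            """Given the button-press time ti, return the capture times of the stored
            recognition results that correspond to td, te, and tf."""
            te = t_press - xd_mm / cs_mm_per_sec
            td = te - 1.0 / fr_frames_per_sec
            tf = te + 1.0 / fr_frames_per_sec
            # Pick the stored capture times closest to td, te, and tf.
            return [min(stored_times, key=lambda t, w=w: abs(t - w)) for w in (td, te, tf)]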
  • Since the image recognition unit 11 correctly recognized the brown bottle image BIa as a brown bottle image at time tb, the brown bottle image BIa is given feature information RRa1 including correct label information and correct contour information COa1. Further, since the image recognition unit 11 correctly recognized the other-color bottle image BIb as an other-color bottle image at time tc, the other-color bottle image BIb is given feature information RRb1 including correct label information LAb and correct contour information COb1. On the other hand, since the image recognition unit 11 erroneously recognized the other-color bottle image BIc as a brown bottle image at time td, the other-color bottle image BIc is given feature information RRc1 including erroneous label information LAc and correct contour information COc1.
  • The other-color bottle image BIb is given feature information RRb2 including correct label information LAb of "other-color bottle" and correct contour information COb2, and the other-color bottle image BIc is given feature information RRc2 including erroneous label information LAc of "brown bottle" and correct contour information COc2.
  • Since the image recognition unit 11 correctly recognized the other-color bottle image BId as an other-color bottle image at time te, the other-color bottle image BId is given feature information RRd1 including correct label information LAd and correct contour information COd1.
  • The other-color bottle image BIc is given feature information RRc3 including erroneous label information LAc of "brown bottle" and correct contour information COc3, and the other-color bottle image BId is given feature information RRd2 including correct label information LAd of "other-color bottle" and correct contour information COd2.
  • Since the image recognition unit 11 correctly recognized the other-color bottle image BIe as an other-color bottle image at time tf, the other-color bottle image BIe is given feature information RRe1 including correct label information LAe and correct contour information COe1.
  • FIG. 13 shows a display example of an erroneous recognition result including an erroneously sorted waste image.
  • The display 60 displays the captured image CI shown in FIG. 13, which has been extracted from the recognition result storage unit 12 by the recognition result extraction unit 13a.
  • The captured image CI includes empty bottle images BI1, BI2, BI3, BI4, and BI5.
  • The empty bottle image BI1 is surrounded by the contour represented by contour information CO1, the empty bottle image BI2 is surrounded by the contour represented by contour information CO2, the empty bottle image BI3 is surrounded by the contour represented by contour information CO3, the empty bottle image BI4 is surrounded by the contour represented by contour information CO4, and the empty bottle image BI5 is surrounded by the contour represented by contour information CO5.
  • The empty bottle image BI5, which is an other-color bottle image, has been erroneously recognized as a brown bottle image.
  • The captured image CI therefore corresponds to an erroneously sorted waste presence image that includes the empty bottle image BI5, which is the erroneously sorted waste image.
  • The operator operates the input device 70 to move the pointer PO displayed on the captured image CI and specifies the label information LA5a of the empty bottle image BI5 by clicking the display position of the label information LA5a.
  • The label correction unit 14 then makes the specified label information LA5a correctable. The operator uses, for example, the keyboard to correct the label information given to the empty bottle image BI5 from "brown bottle" to "other-color bottle".
  • In this way, the label correction unit 14 corrects the erroneous label information LA5a of "brown bottle" attached to the empty bottle image BI5 to the label information LA5b of "other-color bottle", which indicates the correct type of the empty bottle image BI5. Erroneous label information included in the erroneous recognition result is thus corrected by the label correction unit 14.
  • The minimum required storage capacity of the recognition result storage unit 12 can be calculated according to equation (1).
  • "Ts" is the time [sec] required from when a brown bottle enters the angle of view of the camera 20 until the brown bottle sucked by the suction pad 31 is released from the suction pad 31.
  • "Tj" is the time [sec] required for the operator to press the push button 50 after an erroneously sorted other-color bottle is released from the suction pad 31.
  • "FR" is the frame rate of the camera 20.
  • "DS" is the data size [MB] per captured image.
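  • Equation (1) itself is not reproduced in this text. Given the variables defined above, one plausible reading (an assumption, not the patent's stated equation) is that the recognition result storage unit 12 must hold every frame captured between a bottle entering the camera's angle of view and the operator pressing the push button:

        def minimum_storage_mb(ts_sec, tj_sec, fr_frames_per_sec, ds_mb_per_image):
            """Plausible reconstruction: minimum capacity [MB] >= (Ts + Tj) * FR * DS."""
            return (ts_sec + tj_sec) * fr_frames_per_sec * ds_mb_per_image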
  • <Button color transition> FIGS. 16 and 17 are diagrams illustrating examples of button color transitions according to the first embodiment of the present disclosure. As shown in FIGS. 16 and 17, the button color transitions between, for example, green, red, and yellow.
  • While no erroneous recognition result is being extracted, the button color control unit 18 sets the button color to green, which indicates that pressing of the push button 50 can be accepted. When the button color is green, pressing the push button 50 is valid.
  • When the push button 50 is pressed while the button color is green, the button color control unit 18 determines that the operator has instructed it to start extracting an erroneous recognition result and changes the button color from green to yellow. When the button color is yellow, pressing the push button 50 is valid.
  • When the push button 50 is pressed while the button color is yellow, the button color control unit 18 determines that the operator has instructed it to stop extracting the erroneous recognition result and changes the button color from yellow to green. Alternatively, the button color control unit 18 changes the button color from yellow to green when an extraction completion signal is output from the recognition result extraction unit 13a without the push button 50 being pressed while the button color is yellow.
  • The button color control unit 18 changes the button color from green to red when a recognition completion signal is output from the image recognition unit 11 without the push button 50 being pressed while the button color is green.
  • The button color control unit 18 keeps the button color red until the suction pad 31 has finished moving to the target position corresponding to the extraction target coordinates.
  • The button color control unit 18 determines the state of the suction pad 31 based on the sorting state signal output from the object sorting device 30a.
  • After the suction pad 31 has finished moving to the target position corresponding to the extraction target coordinates, the button color control unit 18 keeps the button color red until the desired waste reaches the target position. When the button color control unit 18 determines based on the sorting state signal that the suction pad 31 has failed to suck the desired waste, it changes the button color from red to green. On the other hand, when it determines based on the sorting state signal that the suction pad 31 has successfully sucked the desired waste, it keeps the button color red. The button color control unit 18 also keeps the button color red while the suction pad 31 is moving to the release position for the desired waste.
  • When the button color control unit 18 determines based on the sorting state signal that the suction pad 31 has finished releasing the desired waste at the release position, it changes the button color from red to green.
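  • The transitions described above can be summarised as a small state machine; this sketch uses hypothetical event names (green: press accepted, yellow: erroneous recognition result being extracted, red: sorting operation in progress):

        def next_button_color(color, event):
            if color == "green":
                if event == "button_pressed":
                    return "yellow"  # operator starts extraction of an erroneous recognition result
                if event == "recognition_complete":
                    return "red"     # a sorting operation begins
            elif color == "yellow":
                if event in ("button_pressed", "extraction_complete"):
                    return "green"   # operator stops extraction, or extraction finishes
            elif color == "red":
                if event in ("suction_failed", "release_complete"):
                    return "green"   # sorting operation ends
            return color             # otherwise keep the current color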
  • FIG. 17 shows the relationship between button color transitions and task transitions in the control device 10a.
  • The empty bottles BOf and BOg are other-color bottles, which are undesired waste, and the empty bottle BOh is a brown bottle, which is the desired waste. It is also assumed that the empty bottle BOg is correctly recognized as an other-color bottle, the empty bottle BOh is correctly recognized as a brown bottle, and the empty bottle BOf is erroneously recognized as a brown bottle. The control device 10a therefore generates a task TA1 for the empty bottle BOf and a task TA2 for the empty bottle BOh. A new task is generated when an empty bottle whose barycentric coordinates have been successfully traced leaves the angle of view of the camera 20. An existing task is deleted when the empty bottle is released from the suction pad 31 or when the suction pad 31 fails to suck the empty bottle.
  • the button color control unit 18 sets the button color to green.
  • the button color control unit 18 sets the button color to green. Further, the button color control unit 18 sets the button color to green until time t24, when the task TA2 for the empty bottle BOh is generated.
  • the button color control unit 18 changes the button color from green to red.
  • the button color control unit 18 changes the button color from red to green.
  • the button color control unit 18 changes the button color from green to red.
  • the button color control unit 18 changes the button color from red to green.
  • When software buttons displayed on a touch panel are used as the erroneous sorting input interface instead of the push button 50, the button color control unit 18 can change the color of the software buttons in the same manner as described above.
  • the first embodiment has been described above.
  • FIG. 18 is a diagram illustrating a configuration example of a control device according to a second embodiment of the present disclosure.
  • The control device 10b shown in FIG. 18 corresponds to the control device 10 shown in FIG. 1.
  • The object sorting device 30b shown in FIG. 18 corresponds to the object sorting device 30 shown in FIG. 1.
  • the push button 50 shown in FIGS. 1 and 2 is not connected to the control device 10b.
  • the object sorting device 30b differs from the object sorting device 30a shown in FIG. 2 in that it does not output a sorting state signal.
  • The control device 10b includes an image recognition unit 11, a recognition result storage unit 12, a recognition result extraction unit 13b, a label correction unit 14, a teacher data storage unit 15, a machine learning unit 16, and a trained model storage unit 17.
  • Among the empty bottle images to which the label information "brown bottle" has been assigned in the captured images stored in the recognition result storage unit 12, the recognition result extraction unit 13b determines that an empty bottle image having an area equal to or larger than a threshold TH2 is an erroneously sorted waste image (for example, when the brown bottles to be sorted as the desired waste are known to have areas smaller than the threshold TH2). The recognition result extraction unit 13b then extracts, from the recognition result storage unit 12 and together with the feature information, the captured image including the empty bottle image that has the label information "brown bottle" and an area equal to or larger than the threshold TH2, as the erroneous recognition result.
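  • A minimal sketch of this area-based check, assuming the contour is a closed polygon in camera coordinates and using the shoelace formula (the patent does not specify how the area is computed):

        def contour_area(contour):
            """Shoelace area of the contour polygon [(x0, y0), (x1, y1), ...]."""
            area = 0.0
            n = len(contour)
            for i in range(n):
                x0, y0 = contour[i]
                x1, y1 = contour[(i + 1) % n]
                area += x0 * y1 - x1 * y0
            return abs(area) / 2.0

        def is_missorted_by_area(waste_image, th2):
            """An image labelled "brown bottle" whose area is TH2 or more is treated as an
            erroneously sorted waste image."""
            return waste_image["label"] == "brown bottle" and contour_area(waste_image["contour"]) >= th2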
  • FIG. 19 is a diagram illustrating a configuration example of a control device according to a third embodiment of the present disclosure.
  • The control device 10c shown in FIG. 19 corresponds to the control device 10 shown in FIG. 1.
  • The object sorting device 30c shown in FIG. 19 corresponds to the object sorting device 30 shown in FIG. 1.
  • the push button 50 shown in FIGS. 1 and 2 is not connected to the control device 10c.
  • The control device 10c includes an image recognition unit 11, a recognition result storage unit 12, a recognition result extraction unit 13c, a label correction unit 14, a teacher data storage unit 15, a machine learning unit 16, and a trained model storage unit 17.
  • The object sorting device 30c has a suction pad 31.
  • The suction pad 31 has a weight sensor that detects the weight of the sucked empty bottle, and the object sorting device 30c outputs information indicating the weight of the empty bottle sucked by the suction pad 31 (hereinafter referred to as "weight information") to the recognition result extraction unit 13c.
  • The weight of a brown bottle is often less than a predetermined weight.
  • In the object sorting device 30c, an empty bottle corresponding to an empty bottle image to which the label information "brown bottle" has been assigned is sorted as the desired waste. Therefore, based on the weight information, the recognition result extraction unit 13c determines that, among the empty bottle images with the label information "brown bottle" in the captured images stored in the recognition result storage unit 12, an empty bottle image corresponding to an empty bottle whose weight is equal to or greater than a threshold TH3 is an erroneously sorted waste image.
  • The recognition result extraction unit 13c then extracts, from the recognition result storage unit 12 and together with the feature information, the captured image including the empty bottle image that has the label information "brown bottle" and corresponds to an empty bottle whose weight is equal to or greater than the threshold TH3, as the erroneous recognition result.
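  • A corresponding sketch of the weight-based check in the third embodiment (the data layout is hypothetical; the weight comes from the weight sensor on the suction pad 31):

        def is_missorted_by_weight(label, measured_weight, th3):
            """An empty bottle sorted under the label "brown bottle" whose measured weight is
            TH3 or more is treated as corresponding to an erroneously sorted waste image."""
            return label == "brown bottle" and measured_weight >= th3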
  • the third embodiment has been described above.
  • The recognition result storage unit 12, the teacher data storage unit 15, and the trained model storage unit 17 are implemented as hardware by, for example, memory or storage.
  • The image recognition unit 11, the recognition result extraction units 13a, 13b, and 13c, the label correction unit 14, the machine learning unit 16, and the button color control unit 18 are implemented as hardware by, for example, a CPU (Central Processing Unit), a DSP (Digital Signal Processor), an FPGA (Field Programmable Gate Array), or an ASIC (Application Specific Integrated Circuit).
  • the fourth embodiment has been described above.
  • The object sorting system of the present disclosure includes a camera (camera 20 of the embodiment), a recognition unit (image recognition unit 11 of the embodiment), a storage unit (recognition result storage unit 12 of the embodiment), a sorting unit (object sorting devices 30, 30a, 30b, and 30c of the embodiment), and an extraction unit (recognition result extraction units 13a, 13b, and 13c of the embodiment).
  • The camera captures a first image, which is an image of an object group, on the transport path along which the object group is transported.
  • The recognition unit recognizes, using a trained model, a second image that is an image of each object present in the first image, assigns feature information, which is information indicating features of the second image, to the second image, and generates a recognition result in which the feature information is assigned to the second image in the first image.
  • The storage unit stores the recognition result.
  • The sorting unit sorts out a desired object from the object group conveyed on the transport path based on the recognition result. When an undesired object other than the desired object is erroneously sorted by the sorting unit, the extraction unit determines, among the recognition results stored in the storage unit, an erroneous recognition result that is a recognition result including an image of the erroneously sorted undesired object, and extracts the erroneous recognition result from the storage unit.
  • the feature information includes information indicating the type of each object in the object group and information indicating the outline of each object in the object group.
  • With this configuration, the operator can generate teacher data by correcting the erroneous recognition result extracted from the storage unit. Compared with the case where annotation is performed manually by visually checking images, the effort required to generate teacher data can therefore be reduced.
  • The object sorting system of the present disclosure also has an interface that receives an input indicating that an undesired object has been erroneously sorted by the sorting unit, and the extraction unit extracts the erroneous recognition result from the storage unit when the interface receives that input.
  • the extracting unit extracts an erroneous recognition result from the storage unit based on the area of the second image.
  • Alternatively, the extraction unit extracts the erroneous recognition result from the storage unit based on the weight of the desired object sorted by the sorting unit.
  • The interface is a button (push button 50 in the embodiment) that accepts input from the operator, and the object sorting system of the present disclosure has a control unit (button color control unit 18 in the embodiment) that changes the color of the button according to the extraction status of the erroneous recognition result by the extraction unit.
  • the extraction unit determines an erroneous recognition result based on the frame rate of the camera, the distance between the camera and the sorting unit, and the transportation speed of the object group on the transportation path.

Abstract

An object sorting system in which: a camera 20 captures a first image of an object group on a transport path along which the object group is transported; an image recognition unit 11 recognizes a second image, which is an image of each object present in the first image, using a trained model, assigns feature information indicating features of the second image to the second image, generates a recognition result in which the feature information is assigned to the second image in the first image, and stores the recognition result in a recognition result storage unit 12; an object sorting device 30a sorts a desired object, based on the recognition result, from the object group transported along the transport path; and, when an undesired object other than the desired object is erroneously sorted by the object sorting device 30a, a recognition result extraction unit 13a determines, among the recognition results stored in the recognition result storage unit 12, an erroneous recognition result that includes an image of the erroneously sorted undesired object and extracts the erroneous recognition result from the recognition result storage unit 12.

Description

Object sorting system
The present disclosure relates to an object sorting system.
At waste disposal sites, a large amount of waste flows on belt conveyors and is processed every day. At sites where waste is processed, the waste is sorted manually. While sorting waste is a simple task, it places a heavy burden on the workers who sort it (hereinafter sometimes referred to as "sorting workers"), and devices that sort waste automatically (hereinafter sometimes referred to as "waste sorting devices") have therefore been developed.
Japanese Patent No. 6854995
When a waste sorting device takes over the work that a sorting worker has been doing, it is conceivable that the waste sorting device recognizes each piece of waste flowing on the belt conveyor and, based on the recognition result, extracts the desired waste (hereinafter sometimes referred to as "desired waste") from the waste group flowing on the belt conveyor using a robot hand or a suction pad. When a plurality of types of waste flow on the belt conveyor in a mixed state, the waste sorting device therefore needs to identify the type of each piece of waste. Recognition using a trained model generated by machine learning is effective for recognizing the various types of waste.
However, since machine learning requires an enormous amount of training data, if the annotation work performed on image data of various types of waste to generate that training data is done manually by visually checking the images, generating the training data takes considerable labor.
Therefore, this disclosure proposes a technique that can reduce the labor required to generate training data.
The object sorting system of the present disclosure has a camera, a recognition unit, a storage unit, a sorting unit, and an extraction unit. The camera captures a first image, which is an image of an object group, on a transport path along which the object group is transported. The recognition unit recognizes, using a trained model, a second image that is an image of each object present in the first image, assigns feature information, which is information indicating features of the second image, to the second image, and generates a recognition result in which the feature information is assigned to the second image in the first image. The storage unit stores the recognition result. The sorting unit sorts out a desired object from the object group conveyed on the transport path based on the recognition result. When the sorting unit erroneously sorts an undesired object other than the desired object, the extraction unit determines, among the recognition results stored in the storage unit, an erroneous recognition result that is a recognition result including an image of the erroneously sorted undesired object, and extracts the erroneous recognition result from the storage unit.
According to the disclosed technique, the effort required to generate training data can be reduced.
FIG. 1 is a diagram illustrating a configuration example of an object sorting system according to Example 1 of the present disclosure.
FIG. 2 is a diagram illustrating a configuration example of a control device according to Example 1 of the present disclosure.
FIG. 3 is a diagram illustrating an example of feature information according to Example 1 of the present disclosure.
FIG. 4 is a diagram illustrating a trace example of barycentric coordinates according to Example 1 of the present disclosure.
FIG. 5 is a diagram illustrating a trace example of barycentric coordinates according to Example 1 of the present disclosure.
FIG. 6 is a diagram illustrating a trace example of barycentric coordinates according to Example 1 of the present disclosure.
FIG. 7 is a diagram illustrating a trace example of barycentric coordinates according to Example 1 of the present disclosure.
FIG. 8 is a diagram illustrating an example of extraction of desired waste according to Example 1 of the present disclosure.
FIG. 9 is a diagram illustrating an operation example of the control device according to Example 1 of the present disclosure.
FIG. 10 is a diagram illustrating an operation example of the control device according to Example 1 of the present disclosure.
FIG. 11 is a diagram illustrating an operation example of the control device according to Example 1 of the present disclosure.
FIG. 12 is a diagram illustrating an operation example of the control device according to Example 1 of the present disclosure.
FIG. 13 is a diagram illustrating an operation example of the control device according to Example 1 of the present disclosure.
FIG. 14 is a diagram illustrating an operation example of the control device according to Example 1 of the present disclosure.
FIG. 15 is a diagram illustrating an operation example of the control device according to Example 1 of the present disclosure.
FIG. 16 is a diagram illustrating an example of button color transition according to Example 1 of the present disclosure.
FIG. 17 is a diagram illustrating an example of button color transition according to Example 1 of the present disclosure.
FIG. 18 is a diagram illustrating a configuration example of a control device according to Example 2 of the present disclosure.
FIG. 19 is a diagram illustrating a configuration example of a control device according to Example 3 of the present disclosure.
Hereinafter, embodiments of the present disclosure will be described based on the drawings. The same reference numerals are given to the same configurations in the following embodiments.
[Example 1]
<Configuration of object sorting system>
FIG. 1 is a diagram illustrating a configuration example of an object sorting system according to Example 1 of the present disclosure.
In FIG. 1, the object sorting system 1 has a control device 10, a camera 20, an object sorting device 30, a belt conveyor 40, a push button 50, a display 60, and an input device 70. The push button 50, the display 60, and the input device 70 are connected to the control device 10. Examples of the input device 70 include a pointing device such as a mouse, and a keyboard. The control device 10, the camera 20, and the object sorting device 30 are connected to each other via a network.
The case where the object sorting system 1 shown in FIG. 1 is installed in a waste disposal site where a waste group flows on the belt conveyor 40 will be described below as an example. In other words, the case where the objects to be sorted by the object sorting system 1 are waste will be described below as an example. However, the object sorting system 1 may be installed in an assembly factory or the like where a group of parts flows on a belt conveyor. That is, the objects to be sorted by the object sorting system 1 are not limited to waste, and the object sorting system 1 can be used for various objects.
The belt conveyor 40 conveys the waste group placed on it in the conveying direction CD. That is, the belt conveyor 40 forms a transport path along which the waste group is transported in the conveying direction CD.
The camera 20 is arranged above the belt conveyor 40 on which the waste group is conveyed, has a predetermined angle of view, and continuously photographs a predetermined area on the upper surface of the belt conveyor 40 from above at a constant frame rate. Therefore, the image captured by the camera 20 (hereinafter sometimes referred to as a "captured image") is an image of the waste group. Each captured image is transmitted from the camera 20 to the control device 10.
<Configuration of control device>
FIG. 2 is a diagram illustrating a configuration example of a control device according to Example 1 of the present disclosure. The control device 10a shown in FIG. 2 corresponds to the control device 10 shown in FIG. 1, and the object sorting device 30a shown in FIG. 2 corresponds to the object sorting device 30 shown in FIG. 1. In FIG. 2, the control device 10a has an image recognition unit 11, a recognition result storage unit 12, a recognition result extraction unit 13a, a label correction unit 14, a teacher data storage unit 15, a machine learning unit 16, a trained model storage unit 17, and a button color control unit 18.
The image recognition unit 11 uses the trained model stored in the trained model storage unit 17 to recognize the image of each waste item present in the captured image (hereinafter sometimes referred to as a "waste image"), and assigns to each recognized waste image information indicating the features of that waste image (hereinafter sometimes referred to as "feature information"). The image recognition unit 11 assigns feature information to waste images whose image recognition score is equal to or greater than a threshold TH1. The feature information includes information indicating the type of the waste image (hereinafter sometimes referred to as "type information"), information indicating the contour of the waste image (hereinafter sometimes referred to as "contour information"), and the coordinates of the area centroid of the waste image (hereinafter sometimes referred to as "centroid coordinates"). The image recognition unit 11 recognizes the waste images by, for example, performing instance segmentation on the captured image. The image recognition unit 11 associates each waste image with the feature information assigned to it, generates a recognition result in which feature information is assigned to each waste image in the captured image (hereinafter sometimes referred to as an "image recognition result"), and outputs the generated image recognition result to the recognition result storage unit 12 and the object sorting device 30a. The recognition result storage unit 12 stores the image recognition results sequentially output from the image recognition unit 11 in chronological order. That is, the recognition result storage unit 12 stores captured images in which waste images and feature information are associated with each other. The image recognition unit 11 also outputs to the button color control unit 18 a signal indicating that recognition of the waste images and assignment of feature information to the waste images have been completed (hereinafter sometimes referred to as a "recognition completion signal").
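As a non-limiting illustration of how the image recognition unit 11 could assemble an image recognition result, the following Python sketch assumes a generic instance-segmentation model; the names model.predict, inst.score, inst.mask, inst.label, and the value of the threshold TH1 are placeholders and are not part of the disclosure.

```python
import cv2
import numpy as np

TH1 = 0.7  # assumed score threshold; the actual value is not specified in the disclosure

def recognize(frame, model):
    """Assign type (label), contour, and area centroid to each waste image
    whose recognition score is equal to or greater than TH1."""
    recognition_result = []
    for inst in model.predict(frame):                # hypothetical instance-segmentation API
        if inst.score < TH1:
            continue
        mask = inst.mask.astype(np.uint8)            # binary mask of one waste image
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        contour = contours[0].reshape(-1, 2)         # (x0, y0), ..., (xn, yn) in camera coordinates
        m = cv2.moments(mask, binaryImage=True)
        if m["m00"] == 0:
            continue
        centroid = (m["m10"] / m["m00"], m["m01"] / m["m00"])  # area centroid (X, Y)
        recognition_result.append({
            "label": inst.label,                     # e.g. "brown bottle" / "other-color bottle"
            "contour": contour,
            "centroid": centroid,
        })
    return recognition_result
```

The list returned here corresponds to the feature information assigned to one captured image; storing it together with the frame and its capture time corresponds to the role of the recognition result storage unit 12.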
The object sorting device 30a sorts the waste group conveyed on the belt conveyor 40 by extracting the desired waste from it based on the image recognition result (that is, the waste images and feature information) output from the image recognition unit 11. The object sorting device 30a extracts the desired waste using, for example, a robot hand or a suction pad.
When the push button 50 is pressed, the recognition result extraction unit 13a determines, among the captured images stored in the recognition result storage unit 12, an image (hereinafter sometimes referred to as an "erroneously sorted waste presence image") that includes an image of waste other than the desired waste (hereinafter sometimes referred to as "undesired waste") that was erroneously sorted by the object sorting device 30a (hereinafter sometimes referred to as an "erroneously sorted waste image"). The recognition result extraction unit 13a also determines that the recognition result including the erroneously sorted waste presence image is an erroneous recognition result. The recognition result extraction unit 13a then extracts, as the erroneous recognition result, the erroneously sorted waste presence image and the feature information assigned to each waste image in that image from the recognition result storage unit 12. The recognition result extraction unit 13a outputs the erroneous recognition result extracted from the recognition result storage unit 12 to the label correction unit 14 and displays it on the display 60. The recognition result extraction unit 13a also outputs to the button color control unit 18 a signal indicating that extraction of the erroneous recognition result from the recognition result storage unit 12 has been completed (hereinafter sometimes referred to as an "extraction completion signal").
The push button 50 is pressed by an operator, who presses it when undesired waste has been erroneously sorted by the object sorting device 30a. When the push button 50 is pressed, a signal indicating that it has been pressed (hereinafter sometimes referred to as a "button press signal") is output from the push button 50 to the recognition result extraction unit 13a and the button color control unit 18. The recognition result extraction unit 13a extracts the erroneous recognition result from the recognition result storage unit 12 when the button press signal is input in response to the press of the push button 50. The push button 50 is an example of an interface that receives an input indicating that undesired waste has been erroneously sorted by the object sorting device 30a (hereinafter sometimes referred to as an "erroneous sorting input interface"). An erroneous sorting input interface other than the push button 50 may be, for example, a software button displayed on a touch panel. In other words, the recognition result extraction unit 13a extracts the erroneous recognition result from the recognition result storage unit 12 when the erroneous sorting input interface receives an input indicating that undesired waste has been erroneously sorted by the object sorting device 30a.
The input device 70 is operated by the operator, who can use it to designate an arbitrary location in the erroneously sorted waste presence image displayed on the display 60. When the designated location in the erroneously sorted waste presence image is a location where label information is displayed, the label correction unit 14 allows the operator to correct that label information using the input device 70. The label correction unit 14 outputs the recognition result obtained after the label information in the erroneous recognition result has been corrected (hereinafter sometimes referred to as a "corrected recognition result") to the teacher data storage unit 15.
The teacher data storage unit 15 stores in advance, as teacher data, many waste images to which feature information has been assigned. The teacher data storage unit 15 also newly stores the corrected recognition result output from the label correction unit 14 as additional teacher data.
The machine learning unit 16 performs machine learning using the teacher data stored in the teacher data storage unit 15, and updates the trained model stored in the trained model storage unit 17 with the model obtained after the machine learning. Accordingly, recognition of waste images by the image recognition unit 11 is performed using the trained model after it has been updated by the additional machine learning performed by the machine learning unit 16 using the corrected recognition result.
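The update loop described above (corrected recognition result, additional teacher data, retraining, model replacement) could be sketched as follows; train_fn and model_store are assumed placeholders for the machine learning unit 16 and the trained model storage unit 17, and this is an illustration rather than the actual implementation.

```python
def on_label_corrected(corrected_result, teacher_data, train_fn, model_store):
    """Append a corrected recognition result to the teacher data, retrain,
    and replace the stored trained model."""
    teacher_data.append(corrected_result)   # additional teacher data (teacher data storage unit 15)
    new_model = train_fn(teacher_data)      # additional machine learning (machine learning unit 16)
    model_store.save(new_model)             # update the trained model storage unit 17
    return new_model
```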
Meanwhile, the object sorting device 30a outputs a signal indicating its operating state (hereinafter sometimes referred to as a "sorting state signal") to the button color control unit 18.
The button color control unit 18 changes the color of the push button 50 (hereinafter sometimes referred to as the "button color") based on the button press signal, the recognition completion signal, the extraction completion signal, and the sorting state signal. The push button 50 is configured using, for example, an LED (Light Emitting Diode), and its button color can be changed.
<Feature information>
FIG. 3 is a diagram illustrating an example of feature information according to Example 1 of the present disclosure. In the following, empty bottles are assumed as the waste, and the object sorting device 30a sorts each empty bottle flowing on the belt conveyor 40 into two types: brown empty bottles (hereinafter sometimes referred to as "brown bottles") and empty bottles having a color other than brown (the color hereinafter sometimes referred to as "other color", and the bottles as "other-color bottles"). It is also assumed below that the desired waste is brown bottles and that teacher data for brown bottles is stored in the teacher data storage unit 15.
As shown in FIG. 3, when the captured image includes an image of an empty bottle (hereinafter sometimes referred to as an "empty bottle image") BI as a waste image, the image recognition unit 11 recognizes the empty bottle image BI present in the captured image using the trained model stored in the trained model storage unit 17, and assigns to the recognized empty bottle image BI feature information including label information LA, contour information CO, and centroid coordinates DG. When the empty bottle image BI is an image of a brown bottle (hereinafter sometimes referred to as a "brown bottle image"), the image recognition unit 11 assigns the type information "brown bottle" to the empty bottle image BI as the label information LA. On the other hand, when the empty bottle image BI is an image of an other-color bottle (hereinafter sometimes referred to as an "other-color bottle image"), the image recognition unit 11 assigns the type information "other-color bottle" to the empty bottle image BI as the label information LA. The image recognition unit 11 also assigns contour information CO to the empty bottle image BI in the captured image. With the long side of the rectangular captured image as the X axis and the short side as the Y axis, the contour information CO is formed by a plurality of coordinate points (x0, y0), (x1, y1), ..., (xn, yn). Further, the image recognition unit 11 calculates the area centroid of the empty bottle image BI in the captured image and assigns centroid coordinates DG to the empty bottle image BI. The centroid coordinates DG are formed by a single coordinate point (X, Y).
Here, the plurality of coordinate points (x0, y0), (x1, y1), ..., (xn, yn) forming the contour information CO and the single coordinate point (X, Y) forming the centroid coordinates are coordinate points in the coordinate system of the captured image, that is, the coordinate system of the camera 20 (hereinafter sometimes referred to as the "camera coordinate system").
<Tracing of centroid coordinates>
FIGS. 4, 5, 6, and 7 are diagrams illustrating trace examples of centroid coordinates according to Example 1 of the present disclosure.
First, as shown in FIG. 4, the image recognition unit 11 recognizes a brown bottle image BO in the captured image acquired at time t11 and calculates the centroid coordinates DG1 of the brown bottle image BO at time t11.
Next, as shown in FIG. 5, the image recognition unit 11 predicts, based on the frame rate FR [frame/sec] of the camera 20 and the transport speed CS [mm/sec] of the belt conveyor 40, the centroid coordinates EG2 of the brown bottle image BO in the captured image that will be acquired at time t12, the frame following the captured image acquired at time t11. The centroid coordinates EG2 at time t12 are predicted to have moved from the centroid coordinates DG1 at time t11 by CS/FR per frame in the transport direction CD of the belt conveyor 40 (that is, the X direction in FIG. 1). The image recognition unit 11 also sets a circular tolerance range TR of a predetermined size centered on the predicted centroid coordinates EG2.
Next, as shown in FIG. 6, the image recognition unit 11 recognizes the brown bottle image BO in the captured image acquired at time t12 and calculates the centroid coordinates DG2 of the brown bottle image BO at time t12. The image recognition unit 11 then determines whether the centroid coordinates DG2 are within the tolerance range TR.
When the centroid coordinates DG2 are within the tolerance range TR, the image recognition unit 11 determines that the subjects of the brown bottle images BO at times t11 and t12 are the same brown bottle and, as shown in FIG. 7, sets a straight trace line TL connecting the centroid coordinates DG1 to the centroid coordinates DG2. On the other hand, when the centroid coordinates DG2 are not within the tolerance range TR, the image recognition unit 11 stops tracing the centroid coordinates.
The image recognition unit 11 sequentially performs the above tracing of centroid coordinates on the captured images sequentially acquired at the frame rate FR.
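A minimal sketch of the centroid tracing described above is given below; it assumes that the centroid coordinates have already been converted to the same unit as the transport speed CS (for example, millimeters), and the numerical values of FR, CS, and the tolerance radius are illustrative only.

```python
import math

FR = 10.0         # camera frame rate [frame/sec] (illustrative)
CS = 200.0        # transport speed of the belt conveyor [mm/sec] (illustrative)
TR_RADIUS = 30.0  # radius of the circular tolerance range TR (assumed)

def predict_next_centroid(centroid):
    """Predicted centroid one frame later: shifted by CS/FR along the transport direction (X)."""
    x, y = centroid
    return (x + CS / FR, y)

def is_same_bottle(prev_centroid, new_centroid):
    """True if the new centroid lies within the tolerance range around the prediction,
    i.e. the two images are judged to show the same bottle and the trace line is extended."""
    ex, ey = predict_next_centroid(prev_centroid)
    nx, ny = new_centroid
    return math.hypot(nx - ex, ny - ey) <= TR_RADIUS
```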
<Extraction of desired waste>
FIG. 8 is a diagram illustrating an example of extraction of desired waste according to Example 1 of the present disclosure. In FIG. 8, the frame rate FR of the camera 20 is set, based on, for example, the transport speed CS of the belt conveyor 40 and the angle of view of the camera 20, to a frame rate at which the same empty bottle conveyed in the transport direction CD can be photographed at most three times. The object sorting device 30a uses a suction pad 31 to extract brown bottles, which are the desired waste, from the empty bottle group.
In FIG. 8, the image recognition unit 11 recognizes the brown bottle image BO in the captured image acquired at time t11 and calculates the centroid coordinates DG1 (X1, Y1) of the brown bottle image BO at time t11.
Next, the image recognition unit 11 recognizes the brown bottle image BO in the captured image acquired at time t12 and calculates the centroid coordinates DG2 (X2, Y2) of the brown bottle image BO at time t12.
Next, the image recognition unit 11 determines that the subjects of the brown bottle images BO at times t11 and t12 are the same brown bottle, and sets a straight trace line TL connecting the centroid coordinates DG1 (X1, Y1) to the centroid coordinates DG2 (X2, Y2).
Next, the image recognition unit 11 recognizes the brown bottle image BO in the captured image acquired at time t13 and calculates the centroid coordinates DG3 (X3, Y3) of the brown bottle image BO at time t13.
Next, the image recognition unit 11 determines that the subjects of the brown bottle images BO at times t12 and t13 are the same brown bottle, and sets a straight trace line TL connecting the centroid coordinates DG1 (X1, Y1) to the centroid coordinates DG3 (X3, Y3).
Next, the image recognition unit 11 extends the trace line TL connecting the centroid coordinates DG1 (X1, Y1) to the centroid coordinates DG3 (X3, Y3) by linear approximation to the installation position of the suction pad 31 in the X direction, and calculates the extraction target coordinates TC(X, Y) of the brown bottle by converting the camera coordinate system into the coordinate system of the object sorting device 30a. Based on the transport speed CS, the image recognition unit 11 also calculates the time at which the brown bottle, which is the subject of the brown bottle images BO recognized at times t11, t12, and t13, will reach the extraction target coordinates TC(X, Y) (hereinafter sometimes referred to as the "extraction target time") tH. The image recognition unit 11 transmits a control signal including the calculated extraction target coordinates TC(X, Y) and the extraction target time tH to the object sorting device 30a.
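A sketch of this extraction target calculation is shown below, assuming that a camera-to-device coordinate transform (here called cam_to_robot) is available and that all coordinates and the transport speed use consistent units; it illustrates the linear extrapolation described above and is not the actual implementation.

```python
def extraction_target(dg_first, dg_last, t_last, x_pad, cs, cam_to_robot):
    """Extrapolate the trace line to the suction pad position x_pad (camera X coordinate),
    convert to the object sorting device's coordinate system, and estimate the arrival time.
    t_last is the capture time of dg_last; cs is the transport speed CS."""
    x1, y1 = dg_first                          # e.g. DG1 (X1, Y1)
    x2, y2 = dg_last                           # e.g. DG3 (X3, Y3)
    slope = (y2 - y1) / (x2 - x1) if x2 != x1 else 0.0
    y_pad = y2 + slope * (x_pad - x2)          # linear approximation of the trace line
    tc = cam_to_robot(x_pad, y_pad)            # extraction target coordinates TC(X, Y)
    th = t_last + (x_pad - x2) / cs            # extraction target time tH
    return tc, th
```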
In the object sorting device 30a, in accordance with the extraction target coordinates TC(X, Y) and the extraction target time tH received from the image recognition unit 11, the suction pad 31 moves to the extraction target coordinates TC(X, Y) and, at the extraction target time tH, extracts from the empty bottle group the empty bottle located at the extraction target coordinates TC(X, Y). As a result, the object sorting device 30a extracts from the empty bottle group the brown bottle that is the subject of the brown bottle images BO recognized by the image recognition unit 11 at times t11, t12, and t13.
<Operation of control device>
FIGS. 9 to 15 are diagrams illustrating operation examples of the control device according to Example 1 of the present disclosure.
In FIG. 9, empty bottles BOa, BOb, BOc, BOd, BOe, and BOf are sequentially conveyed by the belt conveyor 40 in the transport direction CD from time ta to time ti. The empty bottle BOa is a brown bottle, which is the desired waste, and the empty bottles BOb, BOc, BOd, BOe, and BOf are other-color bottles, which are undesired waste.
As the empty bottle group is transported, the camera 20 acquires, at time ta, a captured image CIa that contains no empty bottle image; at time tb, a captured image CIb that contains a brown bottle image BIa, which is the image of the brown bottle BOa; at time tc, a captured image CIc that contains the brown bottle image BIa and an other-color bottle image BIb, which is the image of the other-color bottle BOb; at time td, a captured image CId that contains the brown bottle image BIa, the other-color bottle image BIb, and an other-color bottle image BIc, which is the image of the other-color bottle BOc; and at time te, a captured image CIe that contains the other-color bottle image BIb, the other-color bottle image BIc, and an other-color bottle image BId, which is the image of the other-color bottle BOd. Further, as the empty bottle group is transported, the camera 20 acquires, at time tf, a captured image CIf that contains the other-color bottle image BIc, the other-color bottle image BId, and an other-color bottle image BIe, which is the image of the other-color bottle BOe; at time tg, a captured image CIg that contains the other-color bottle image BId, the other-color bottle image BIe, and an other-color bottle image BIf, which is the image of the other-color bottle BOf; at time th, a captured image CIh that contains the other-color bottle image BIe and the other-color bottle image BIf; at time ti, a captured image CIi that contains the other-color bottle image BIf; and at time tj, a captured image CIj that contains no empty bottle image. Accordingly, at time ti, the recognition result storage unit 12 stores the captured images CIa, CIb, CIc, CId, CIe, CIf, CIg, CIh, and CIi, each with feature information assigned, in one-to-one correspondence with the capture times ta, tb, tc, td, te, tf, tg, th, and ti.
Here, it is assumed that, at time tb, the image recognition unit 11 correctly recognized the brown bottle image BIa as a brown bottle image. Accordingly, at time tf the object sorting device 30a extracts the brown bottle BOa from the empty bottle group using the suction pad 31, and at time tg it moves the suction pad 31 to the release position and releases the brown bottle BOa from the suction pad 31.
On the other hand, it is assumed that, at time td, the image recognition unit 11 erroneously recognized the other-color bottle image BIc as a brown bottle image. Accordingly, at time th the object sorting device 30a extracts the other-color bottle BOc from the empty bottle group using the suction pad 31, and at time ti it moves the suction pad 31 to the release position and releases the other-color bottle BOc from the suction pad 31. In other words, as a result of the image recognition unit 11 erroneously recognizing the other-color bottle image BIc as a brown bottle image, the object sorting device 30a erroneously sorts the other-color bottle BOc as a brown bottle.
At time ti, when the other-color bottle BOc is released from the suction pad 31, the operator notices that the other-color bottle BOc, which is not a brown bottle, has been erroneously sorted by the object sorting device 30a, and presses the push button 50. Therefore, at time ti, a button press signal is output from the push button 50 to the recognition result extraction unit 13a and the button color control unit 18.
In response to the button press signal, the recognition result extraction unit 13a determines, at time ti, the erroneous recognition result including the other-color bottle image BIc, which is the erroneously sorted waste image. Based on the frame rate FR, the distance XD in the X direction between the camera 20 and the object sorting device 30a, and the transport speed CS, the recognition result extraction unit 13a determines that, among the captured images CIa, CIb, CIc, CId, CIe, CIf, CIg, CIh, and CIi stored in the recognition result storage unit 12, the captured images CId, CIe, and CIf associated with times td, te, and tf are the erroneously sorted waste presence images. For example, the recognition result extraction unit 13a determines the time XD/CS [sec] before the time ti at which the push button 50 was pressed to be time te, the time ((XD/CS) + (1/FR)) [sec] before time ti to be time td, and the time ((XD/CS) - (1/FR)) [sec] before time ti to be time tf. The recognition result extraction unit 13a then extracts, as the erroneous recognition result, the recognition results including the captured images CId, CIe, and CIf corresponding to times td, te, and tf from among the captured images CIa, CIb, CIc, CId, CIe, CIf, CIg, CIh, and CIi stored in the recognition result storage unit 12 at time ti. The recognition result extraction unit 13a outputs the captured images CId, CIe, and CIf extracted from the recognition result storage unit 12 to the label correction unit 14 together with the feature information, and displays them on the display 60 together with the contours represented by the contour information.
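The time-window calculation in this example can be written compactly as below; td is one frame period earlier and tf one frame period later than te, and the function and variable names are placeholders.

```python
def misrecognition_frame_times(t_press, xd, cs, fr):
    """Capture times of the frames presumed to contain the erroneously sorted bottle,
    derived from the button-press time t_press, the camera-to-sorter distance xd,
    the transport speed cs, and the frame rate fr."""
    te = t_press - xd / cs   # middle frame
    td = te - 1.0 / fr       # one frame earlier
    tf = te + 1.0 / fr       # one frame later
    return td, te, tf
```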
Because the image recognition unit 11 correctly recognized the brown bottle image BIa as a brown bottle image at time tb, in the captured image CId, as shown in FIG. 10, the brown bottle image BIa is given feature information RRa1 including the correct label information LAa of "brown bottle" and the correct contour information COa1. Because the image recognition unit 11 correctly recognized the other-color bottle image BIb as an other-color bottle image at time tc, in the captured image CId the other-color bottle image BIb is given feature information RRb1 including the correct label information LAb of "other-color bottle" and the correct contour information COb1. On the other hand, because the image recognition unit 11 erroneously recognized the other-color bottle image BIc as a brown bottle image at time td, in the captured image CId the other-color bottle image BIc is given feature information RRc1 including the incorrect label information LAc of "brown bottle" and the correct contour information COc1.
Similarly, as shown in FIG. 11, in the captured image CIe, the other-color bottle image BIb is given feature information RRb2 including the correct label information LAb of "other-color bottle" and the correct contour information COb2, and the other-color bottle image BIc is given feature information RRc2 including the incorrect label information LAc of "brown bottle" and the correct contour information COc2. In addition, because the image recognition unit 11 correctly recognized the other-color bottle image BId as an other-color bottle image at time te, in the captured image CIe the other-color bottle image BId is given feature information RRd1 including the correct label information LAd of "other-color bottle" and the correct contour information COd1.
Similarly, as shown in FIG. 12, in the captured image CIf, the other-color bottle image BIc is given feature information RRc3 including the incorrect label information LAc of "brown bottle" and the correct contour information COc3, and the other-color bottle image BId is given feature information RRd2 including the correct label information LAd of "other-color bottle" and the correct contour information COd2. In addition, because the image recognition unit 11 correctly recognized the other-color bottle image BIe as an other-color bottle image at time tf, in the captured image CIf the other-color bottle image BIe is given feature information RRe1 including the correct label information LAe of "other-color bottle" and the correct contour information COe1.
FIG. 13 shows a display example of an erroneous recognition result including an erroneously sorted waste image. The display 60 displays, for example, a captured image CI as shown in FIG. 13 that has been extracted from the recognition result storage unit 12 by the recognition result extraction unit 13a. The captured image CI includes empty bottle images BI1, BI2, BI3, BI4, and BI5. The empty bottle image BI1 is surrounded by the contour represented by contour information CO1, the empty bottle image BI2 by the contour represented by contour information CO2, the empty bottle image BI3 by the contour represented by contour information CO3, the empty bottle image BI4 by the contour represented by contour information CO4, and the empty bottle image BI5 by the contour represented by contour information CO5.
Here, it is assumed that the correct color of the empty bottle images BI1, BI3, and BI4 is brown, and that the correct color of the empty bottle images BI2 and BI5 is another color. Accordingly, the label information LA1, LA3, and LA4 of "brown bottle" given to the empty bottle images BI1, BI3, and BI4 and the label information LA2 of "other-color bottle" given to the empty bottle image BI2 are correct, whereas the label information LA5a of "brown bottle" given to the empty bottle image BI5 is incorrect. In other words, the empty bottle image BI5, which is an other-color bottle image, corresponds to an erroneously sorted waste image, and the captured image CI corresponds to an erroneously sorted waste presence image including the erroneously sorted waste image BI5.
Therefore, the operator, for example, uses the pointer PO displayed on the captured image CI by operating the input device 70 and, as shown in FIG. 14, designates the label information LA5a of the empty bottle image BI5 by clicking the location in the captured image CI where the label information LA5a of "brown bottle" is displayed. When the label information LA5a is designated by operating the input device 70, the label correction unit 14 allows the designated label information LA5a to be corrected. The operator then uses, for example, the keyboard to correct the label information given to the empty bottle image BI5 from "brown bottle" to "other-color bottle", as shown in FIG. 15. In accordance with the operator's correction, the label correction unit 14 corrects the incorrect label information LA5a of "brown bottle" given to the empty bottle image BI5 to the label information LA5b of "other-color bottle", which indicates the correct type of the empty bottle image BI5. In this way, incorrect label information included in the erroneous recognition result is corrected by the label correction unit 14.
Note that the minimum required storage capacity of the recognition result storage unit 12 can be calculated according to equation (1). In equation (1), "Ts" is the time [sec] required from when a brown bottle enters the angle of view of the camera 20 until the brown bottle picked up by the suction pad 31 is released from the suction pad 31, "Tj" is the time [sec] required from when an erroneously sorted other-color bottle is released from the suction pad 31 until the operator presses the push button 50, "FR" is the frame rate of the camera 20 [frame/sec], and "DS" is the data size [MB] per captured image.
  Storage capacity of recognition result storage unit 12 ≥ (Ts + Tj) × FR × DS …(1)
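Reading FR as a frame rate in frames per second, (Ts + Tj) × FR frames of size DS must be retained; a sketch of this sizing estimate with illustrative numbers follows (the values are assumptions, not figures from the disclosure).

```python
def min_storage_mb(ts, tj, fr, ds):
    """Minimum storage capacity of the recognition result storage unit 12 in MB."""
    return (ts + tj) * fr * ds

# e.g. 12 s from entering the camera's view until release, 3 s of operator reaction time,
# 10 frame/s, 2 MB per captured image -> 300 MB (illustrative values)
print(min_storage_mb(12.0, 3.0, 10.0, 2.0))
```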
<Button color transitions>
FIGS. 16 and 17 are diagrams illustrating examples of button color transitions according to Example 1 of the present disclosure. As shown in FIGS. 16 and 17, the button color transitions between, for example, green, red, and yellow.
In FIG. 16, while the image recognition unit 11 is in the recognition waiting state from the output of the previous recognition completion signal until the output of the next recognition completion signal, the button color control unit 18 sets the button color to green, which indicates that extraction of an erroneous recognition result can be accepted. When the button color is green, pressing the push button 50 is valid.
When the push button 50 in the green state is pressed and the button press signal is input, the button color control unit 18 determines that the operator has instructed it to start extracting the erroneous recognition result, and changes the button color from green to yellow. When the button color is yellow, pressing the push button 50 is valid.
When the push button 50 in the yellow state is pressed and the button press signal is input, the button color control unit 18 determines that the operator has instructed it to cancel extraction of the erroneous recognition result, and changes the button color from yellow to green. Alternatively, the button color control unit 18 changes the button color from yellow to green at the point when the extraction completion signal is output from the recognition result extraction unit 13a without the push button 50 being pressed while the button color is yellow.
The button color control unit 18 also changes the button color from green to red at the point when the recognition completion signal is output from the image recognition unit 11 without the push button 50 being pressed while the button color is green. When the button color is red, pressing the push button 50 is invalid. After recognition by the image recognition unit 11 is completed, the button color control unit 18 keeps the button color red until the suction pad 31 has finished moving to the target position corresponding to the extraction target coordinates. The button color control unit 18 determines the state of the suction pad 31 based on the sorting state signal output from the object sorting device 30a.
The button color control unit 18 also keeps the button color red, after the suction pad 31 has finished moving to the target position corresponding to the extraction target coordinates, until the desired waste arrives at the target position. When the button color control unit 18 determines based on the sorting state signal that the suction pad 31 has failed to pick up the desired waste, it changes the button color from red to green. On the other hand, when the button color control unit 18 determines based on the sorting state signal that the suction pad 31 has successfully picked up the desired waste, it keeps the button color red. The button color control unit 18 keeps the button color red while the suction pad 31 is moving to the release position of the desired waste.
Then, when the button color control unit 18 determines based on the sorting state signal that the suction pad 31 has finished releasing the desired waste at the release position, it changes the button color from red to green.
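The transitions described above form a small state machine; the following sketch summarizes them, with the event names used here (button_pressed, recognition_done, extraction_done, pick_failed, release_done) being placeholders for the corresponding signals.

```python
GREEN, YELLOW, RED = "green", "yellow", "red"

def next_button_color(color, event):
    """Return the next button color for a given event, following the transitions above."""
    if color == GREEN:
        if event == "button_pressed":
            return YELLOW        # start extracting the erroneous recognition result
        if event == "recognition_done":
            return RED           # sorting in progress; button presses are invalid
    elif color == YELLOW:
        if event in ("button_pressed", "extraction_done"):
            return GREEN         # extraction cancelled by the operator or completed
    elif color == RED:
        if event in ("pick_failed", "release_done"):
            return GREEN         # pick-up failed, or the desired waste was released
    return color                 # otherwise the color is maintained
```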
FIG. 17 shows the relationship between button color transitions and task transitions in the control device 10a.
In FIG. 17, the empty bottles BOf and BOg are other-color bottles, which are undesired waste, and the empty bottle BOh is a brown bottle, which is the desired waste. It is also assumed that the empty bottle BOg was correctly recognized as an other-color bottle and the empty bottle BOh was correctly recognized as a brown bottle, while the empty bottle BOf was erroneously recognized as a brown bottle. Accordingly, the control device 10a generates a task TA1 for the empty bottle BOf and a task TA2 for the empty bottle BOh. A new task is generated when an empty bottle whose centroid coordinates have been successfully traced leaves the angle of view of the camera 20. An existing task is deleted when the empty bottle is released from the suction pad 31 or when the suction pad 31 fails to pick up the empty bottle.
When no task exists at time t21, the button color control unit 18 sets the button color to green.
Next, when the task TA1 for the empty bottle BOf is generated at time t22, the button color control unit 18 sets the button color to green. The button color control unit 18 keeps the button color green until time t24, when the task TA2 for the empty bottle BOh is generated.
Next, when the task TA2 for the empty bottle BOh is generated at time t24, the button color control unit 18 changes the button color from green to red.
Next, when the task TA1 is deleted at time t25, the button color control unit 18 changes the button color from red to green.
Next, at time t26, a predetermined time after the task TA1 was deleted, the button color control unit 18 changes the button color from green to red.
Then, when the task TA2 is deleted at time t27, the button color control unit 18 changes the button color from red to green.
When a software button displayed on a touch panel is used as the erroneous sorting input interface instead of the push button 50, the button color control unit 18 can change the color of the software button in the same manner as described above.
Example 1 has been described above.
[Example 2]
<Configuration of control device>
FIG. 18 is a diagram illustrating a configuration example of a control device according to Example 2 of the present disclosure. The control device 10b shown in FIG. 18 corresponds to the control device 10 shown in FIG. 1, and the object sorting device 30b shown in FIG. 18 corresponds to the object sorting device 30 shown in FIG. 1. However, the push button 50 shown in FIGS. 1 and 2 is not connected to the control device 10b. The object sorting device 30b also differs from the object sorting device 30a shown in FIG. 2 in that it does not output a sorting state signal. In FIG. 18, the control device 10b has an image recognition unit 11, a recognition result storage unit 12, a recognition result extraction unit 13b, a label correction unit 14, a teacher data storage unit 15, a machine learning unit 16, and a trained model storage unit 17.
Usually, the size of a brown bottle is often less than a predetermined size. In addition, the object sorting device 30b sorts, as the desired waste, the empty bottle corresponding to an empty bottle image to which the label information "brown bottle" has been assigned. Therefore, the recognition result extraction unit 13b determines that, among the empty bottle images to which the label information "brown bottle" has been assigned in the captured images stored in the recognition result storage unit 12, an empty bottle image having an area equal to or larger than a threshold TH2 is an erroneously sorted waste image. The recognition result extraction unit 13b then extracts from the recognition result storage unit 12, as the erroneous recognition result, the captured image including the empty bottle image that has the label information "brown bottle" and an area equal to or larger than the threshold TH2, together with its feature information.
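A sketch of this area-based check is given below; the value of the threshold TH2 and the label string are assumptions for illustration only.

```python
import cv2
import numpy as np

TH2 = 5000.0  # assumed area threshold; the actual value is not specified in the disclosure

def is_missorted_by_area(label, contour):
    """Example 2: an empty bottle image labelled "brown bottle" whose area is equal to or
    larger than TH2 is judged to be an erroneously sorted waste image."""
    area = cv2.contourArea(np.asarray(contour, dtype=np.float32))
    return label == "brown bottle" and area >= TH2
```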
Example 2 has been described above.
[Example 3]
<Configuration of control device>
FIG. 19 is a diagram illustrating a configuration example of a control device according to Example 3 of the present disclosure. The control device 10c shown in FIG. 19 corresponds to the control device 10 shown in FIG. 1, and the object sorting device 30c shown in FIG. 19 corresponds to the object sorting device 30 shown in FIG. 1. However, the push button 50 shown in FIGS. 1 and 2 is not connected to the control device 10c.
In FIG. 19, the control device 10c has an image recognition unit 11, a recognition result storage unit 12, a recognition result extraction unit 13c, a label correction unit 14, a teacher data storage unit 15, a machine learning unit 16, and a trained model storage unit 17. The object sorting device 30c has a suction pad 31. The suction pad 31 has a weight sensor that detects the weight of the empty bottle it has picked up, and the object sorting device 30c outputs information indicating the weight of the empty bottle picked up by the suction pad 31 (hereinafter sometimes referred to as "weight information") to the recognition result extraction unit 13c.
Usually, the weight of a brown bottle is often less than a predetermined weight. In addition, the object sorting device 30c sorts, as the desired waste, the empty bottle corresponding to an empty bottle image to which the label information "brown bottle" has been assigned. Therefore, based on the weight information, the recognition result extraction unit 13c determines that, among the empty bottle images to which the label information "brown bottle" has been assigned in the captured images stored in the recognition result storage unit 12, an empty bottle image corresponding to an empty bottle having a weight equal to or greater than a threshold TH3 is an erroneously sorted waste image. The recognition result extraction unit 13c then extracts from the recognition result storage unit 12, as the erroneous recognition result, the captured image including the empty bottle image that has the label information "brown bottle" and corresponds to an empty bottle having a weight equal to or greater than the threshold TH3, together with its feature information.
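The corresponding weight-based check could look like the sketch below; the value of the threshold TH3 is an assumption.

```python
TH3 = 500.0  # assumed weight threshold [g]; the actual value is not specified in the disclosure

def is_missorted_by_weight(label, measured_weight):
    """Example 3: a bottle picked up as a "brown bottle" whose weight reported by the
    suction pad's weight sensor is equal to or greater than TH3 is judged to be
    erroneously sorted."""
    return label == "brown bottle" and measured_weight >= TH3
```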
Example 3 has been described above.
[Example 4]
The recognition result storage unit 12, the teacher data storage unit 15, and the trained model storage unit 17 are implemented in hardware by, for example, a memory or storage. The image recognition unit 11, the recognition result extraction units 13a, 13b, and 13c, the label correction unit 14, the machine learning unit 16, and the button color control unit 18 are implemented in hardware by a processor such as, for example, a CPU (Central Processing Unit), a DSP (Digital Signal Processor), an FPGA (Field Programmable Gate Array), or an ASIC (Application Specific Integrated Circuit).
Example 4 has been described above.
As described above, the object sorting system of the present disclosure (the object sorting system 1 of the examples) has a camera (the camera 20 of the examples), a recognition unit (the image recognition unit 11 of the examples), a storage unit (the recognition result storage unit 12 of the examples), a sorting unit (the object sorting devices 30, 30a, 30b, and 30c of the examples), and an extraction unit (the recognition result extraction units 13a, 13b, and 13c of the examples). The camera captures a first image, which is an image of an object group, on a transport path along which the object group is transported. The recognition unit recognizes, using a trained model, a second image, which is an image of each object present in the first image, assigns to the second image feature information, which is information indicating a feature of the second image, and generates a recognition result in which the feature information is assigned to the second image in the first image. The storage unit stores the recognition result. The sorting unit sorts a desired object from the object group transported on the transport path based on the recognition result. When an undesired object other than the desired object is erroneously sorted by the sorting unit, the extraction unit determines, among the recognition results stored in the storage unit, an erroneous recognition result, which is a recognition result including an image of the erroneously sorted undesired object, and extracts the erroneous recognition result from the storage unit. For example, the feature information includes information indicating the type of each object in the object group and information indicating the contour of each object in the object group.
In this way, the operator can generate teacher data by correcting the erroneous recognition result extracted from the storage unit, so the labor required to generate teacher data can be reduced compared with performing all annotation work on the image data manually.
The object sorting system of the present disclosure also has an interface that receives an input indicating that an undesired object has been erroneously sorted by the sorting unit, and the extraction unit extracts the erroneous recognition result from the storage unit when the input is received at the interface.
Alternatively, the extraction unit extracts the erroneous recognition result from the storage unit based on the area of the second image.
Alternatively, the extraction unit extracts the erroneous recognition result from the storage unit based on the weight of the desired object sorted by the sorting unit.
In this way, the erroneous recognition result can be extracted accurately.
The interface is a button that receives the input from an operator (the push button 50 of the examples), and the object sorting system of the present disclosure has a control unit that changes the color of the button according to the extraction status of the erroneous recognition result by the extraction unit.
In this way, the operator can be notified of the extraction status of the erroneous recognition result.
The extraction unit also determines the erroneous recognition result based on the frame rate of the camera, the distance between the camera and the sorting unit, and the transport speed of the object group on the transport path.
In this way, the erroneous recognition result can be determined accurately.
1 object sorting system
10, 10a, 10b, 10c control device
20 camera
30, 30a, 30b, 30c object sorting device
50 push button
11 image recognition unit
12 recognition result storage unit
13a, 13b, 13c recognition result extraction unit
14 label correction unit
15 teacher data storage unit
16 machine learning unit
17 trained model storage unit
18 button color control unit

Claims (7)

  1.  An object sorting system comprising:
     a camera that captures a first image, which is an image of an object group, on a transport path along which the object group is transported;
     a recognition unit that recognizes, using a trained model, a second image, which is an image of each object present in the first image, assigns to the second image feature information, which is information indicating a feature of the second image, and generates a recognition result in which the feature information is assigned to the second image in the first image;
     a storage unit that stores the recognition result;
     a sorting unit that sorts a desired object from the object group transported on the transport path based on the recognition result; and
     an extraction unit that, when an undesired object other than the desired object is erroneously sorted by the sorting unit, determines, among the recognition results stored in the storage unit, an erroneous recognition result, which is a recognition result including an image of the erroneously sorted undesired object, and extracts the erroneous recognition result from the storage unit.
  2.  The object sorting system according to claim 1, further comprising an interface that receives an input indicating that the undesired object has been erroneously sorted by the sorting unit,
     wherein the extraction unit extracts the erroneous recognition result from the storage unit when the input is received at the interface.
  3.  The object sorting system according to claim 2, wherein the interface is a button that receives the input from an operator,
     the object sorting system further comprising a control unit that changes a color of the button according to an extraction status of the erroneous recognition result by the extraction unit.
  4.  The object sorting system according to claim 1, wherein the extraction unit extracts the erroneous recognition result from the storage unit based on an area of the second image.
  5.  The object sorting system according to claim 1, wherein the extraction unit extracts the erroneous recognition result from the storage unit based on a weight of the desired object sorted by the sorting unit.
  6.  The object sorting system according to claim 1, wherein the extraction unit determines the erroneous recognition result based on a frame rate of the camera, a distance between the camera and the sorting unit, and a transport speed of the object group on the transport path.
  7.  The object sorting system according to claim 1, wherein the feature information includes information indicating a type of each object in the object group and information indicating a contour of each object in the object group.
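 As an illustrative aid to claims 1 and 7, the following is a minimal sketch of how a recognition result carrying feature information (object type and contour) might be represented; the class and field names are hypothetical and do not reflect the embodiment's actual data format.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class FeatureInfo:
        object_type: str                # e.g. "PET bottle", "aluminum can" (assumed labels)
        contour: List[Tuple[int, int]]  # outline of the object as pixel coordinates

    @dataclass
    class RecognitionResult:
        frame_index: int                              # which first image (camera frame) this belongs to
        second_image_bbox: Tuple[int, int, int, int]  # region of the object within the first image
        feature_info: FeatureInfo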
PCT/JP2021/041598 2021-11-11 2021-11-11 Object sorting system WO2023084713A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/041598 WO2023084713A1 (en) 2021-11-11 2021-11-11 Object sorting system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2021/041598 WO2023084713A1 (en) 2021-11-11 2021-11-11 Object sorting system

Publications (1)

Publication Number Publication Date
WO2023084713A1 true WO2023084713A1 (en) 2023-05-19

Family

ID=86335406

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/041598 WO2023084713A1 (en) 2021-11-11 2021-11-11 Object sorting system

Country Status (1)

Country Link
WO (1) WO2023084713A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017109197A (en) * 2016-07-06 2017-06-22 ウエノテックス株式会社 Waste screening system and screening method therefor
JP2020011182A (en) * 2018-07-14 2020-01-23 株式会社京都製作所 Commodity inspection device, commodity inspection method and commodity inspection program
JP2021030219A (en) * 2020-03-18 2021-03-01 株式会社イーアイアイ Article sorting apparatus and article sorting method
JP6958663B2 (en) * 2020-04-21 2021-11-02 三菱マテリアル株式会社 Dismantled object sorting device

Similar Documents

Publication Publication Date Title
JP6679188B1 (en) Waste sorting device and waste sorting method
US5617312A (en) Computer system that enters control information by means of video camera
CN111626117A (en) Garbage sorting system and method based on target detection
US20060007304A1 (en) System and method for displaying item information
CN106737664A (en) Sort the Delta robot control methods and system of multiclass workpiece
CN106000904A (en) Automatic sorting system for household refuse
JP2008108008A (en) Moving pattern specification device, moving pattern specification method, moving pattern specification program, and recording medium that recorded this
CN111715559A (en) Garbage sorting system based on machine vision
US7925046B2 (en) Implicit video coding confirmation of automatic address recognition
JP2021030107A (en) Article sorting apparatus, article sorting system, and article sorting method
CN108038861A (en) A kind of multi-robot Cooperation method for sorting, system and device
WO2023084713A1 (en) Object sorting system
JP2021030219A (en) Article sorting apparatus and article sorting method
CN105225248B (en) The method and apparatus for identifying the direction of motion of object
JP2018153874A (en) Presentation device, presentation method, program and work system
CN112090782A (en) Man-machine cooperative garbage sorting system and method
JP2018153873A (en) Device for controlling manipulator, control method, program and work system
CN113255550B (en) Pedestrian garbage throwing frequency counting method based on video
KR101461145B1 (en) System for Controlling of Event by Using Depth Information
JP2861014B2 (en) Object recognition device
WO2023062828A1 (en) Training device
CN113971746B (en) Garbage classification method and device based on single hand teaching and intelligent sorting system
CN206184783U (en) Can error correcting letter sorting system
CN111047731A (en) AR technology-based telecommunication room inspection method and system
WO2019159409A1 (en) Goods tracker, goods counter, goods-tracking method, goods-counting method, goods-tracking system, and goods-counting system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21964068

Country of ref document: EP

Kind code of ref document: A1