WO2022137822A1 - Identification method for objects to be sorted, sorting method, sorting device, and identification device - Google Patents
Identification method for objects to be sorted, sorting method, sorting device, and identification device
- Publication number
- WO2022137822A1 (PCT/JP2021/040519)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sorted
- information
- identification
- image
- unit
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 185
- 238000003384 imaging method Methods 0.000 claims abstract description 95
- 230000008569 process Effects 0.000 claims description 82
- 238000012546 transfer Methods 0.000 claims description 54
- 230000032258 transport Effects 0.000 claims description 44
- 238000002372 labelling Methods 0.000 description 143
- 238000012545 processing Methods 0.000 description 110
- 230000002950 deficient Effects 0.000 description 52
- 238000001514 detection method Methods 0.000 description 35
- 238000009966 trimming Methods 0.000 description 30
- 239000000463 material Substances 0.000 description 29
- 238000001914 filtration Methods 0.000 description 21
- 230000003287 optical effect Effects 0.000 description 20
- 230000008878 coupling Effects 0.000 description 18
- 238000010168 coupling process Methods 0.000 description 18
- 238000005859 coupling reaction Methods 0.000 description 18
- 230000007246 mechanism Effects 0.000 description 18
- 230000006870 function Effects 0.000 description 17
- 239000012530 fluid Substances 0.000 description 16
- 238000010586 diagram Methods 0.000 description 15
- 230000000694 effects Effects 0.000 description 12
- 238000007689 inspection Methods 0.000 description 12
- 238000010606 normalization Methods 0.000 description 10
- 238000012790 confirmation Methods 0.000 description 8
- 235000013305 food Nutrition 0.000 description 8
- 238000010801 machine learning Methods 0.000 description 8
- 239000000126 substance Substances 0.000 description 8
- 238000012360 testing method Methods 0.000 description 8
- 235000013339 cereals Nutrition 0.000 description 6
- 238000013135 deep learning Methods 0.000 description 6
- 238000012706 support-vector machine Methods 0.000 description 6
- 230000009471 action Effects 0.000 description 5
- 238000007599 discharging Methods 0.000 description 5
- 230000001678 irradiating effect Effects 0.000 description 5
- 238000000862 absorption spectrum Methods 0.000 description 3
- 238000013528 artificial neural network Methods 0.000 description 3
- 238000007477 logistic regression Methods 0.000 description 3
- 238000005192 partition Methods 0.000 description 3
- 238000007637 random forest analysis Methods 0.000 description 3
- 235000010627 Phaseolus vulgaris Nutrition 0.000 description 2
- 244000046052 Phaseolus vulgaris Species 0.000 description 2
- 230000002159 abnormal effect Effects 0.000 description 2
- 238000004364 calculation method Methods 0.000 description 2
- 238000006243 chemical reaction Methods 0.000 description 2
- 238000010276 construction Methods 0.000 description 2
- 238000007796 conventional method Methods 0.000 description 2
- 238000005520 cutting process Methods 0.000 description 2
- 238000011478 gradient descent method Methods 0.000 description 2
- 235000014571 nuts Nutrition 0.000 description 2
- 238000002360 preparation method Methods 0.000 description 2
- 244000144725 Amygdalus communis Species 0.000 description 1
- 235000011437 Amygdalus communis Nutrition 0.000 description 1
- 244000058871 Echinochloa crus-galli Species 0.000 description 1
- 235000008247 Echinochloa frumentacea Nutrition 0.000 description 1
- 241000196324 Embryophyta Species 0.000 description 1
- 240000008620 Fagopyrum esculentum Species 0.000 description 1
- 235000009419 Fagopyrum esculentum Nutrition 0.000 description 1
- 244000068988 Glycine max Species 0.000 description 1
- 235000010469 Glycine max Nutrition 0.000 description 1
- 240000007594 Oryza sativa Species 0.000 description 1
- 235000007164 Oryza sativa Nutrition 0.000 description 1
- 235000006089 Phaseolus angularis Nutrition 0.000 description 1
- 244000062793 Sorghum vulgare Species 0.000 description 1
- 235000021307 Triticum Nutrition 0.000 description 1
- 244000098338 Triticum aestivum Species 0.000 description 1
- 240000007098 Vigna angularis Species 0.000 description 1
- 235000010711 Vigna angularis Nutrition 0.000 description 1
- 240000008042 Zea mays Species 0.000 description 1
- 235000005824 Zea mays ssp. parviglumis Nutrition 0.000 description 1
- 235000002017 Zea mays subsp mays Nutrition 0.000 description 1
- 230000005856 abnormality Effects 0.000 description 1
- 235000020224 almond Nutrition 0.000 description 1
- 238000003705 background correction Methods 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 239000004464 cereal grain Substances 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 235000009508 confectionery Nutrition 0.000 description 1
- 235000005822 corn Nutrition 0.000 description 1
- 230000003247 decreasing effect Effects 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 235000013399 edible fruits Nutrition 0.000 description 1
- 230000005684 electric field Effects 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 239000011521 glass Substances 0.000 description 1
- 239000008187 granular material Substances 0.000 description 1
- 238000009499 grossing Methods 0.000 description 1
- 239000012535 impurity Substances 0.000 description 1
- 238000009434 installation Methods 0.000 description 1
- 235000019713 millet Nutrition 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
- 238000012805 post-processing Methods 0.000 description 1
- 230000001902 propagating effect Effects 0.000 description 1
- 235000021067 refined food Nutrition 0.000 description 1
- 229920005989 resin Polymers 0.000 description 1
- 239000011347 resin Substances 0.000 description 1
- 235000009566 rice Nutrition 0.000 description 1
- 238000010187 selection method Methods 0.000 description 1
- 230000035945 sensitivity Effects 0.000 description 1
- 238000001228 spectrum Methods 0.000 description 1
- 239000004575 stone Substances 0.000 description 1
- 238000012549 training Methods 0.000 description 1
- 235000013311 vegetables Nutrition 0.000 description 1
- 239000002023 wood Substances 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N23/00—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
- G01N23/02—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material
- G01N23/04—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and forming images of the material
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B07—SEPARATING SOLIDS FROM SOLIDS; SORTING
- B07C—POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
- B07C5/00—Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
- B07C5/34—Sorting according to other particular properties
- B07C5/3416—Sorting according to other particular properties according to radiation transmissivity, e.g. for light, x-rays, particle radiation
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B07—SEPARATING SOLIDS FROM SOLIDS; SORTING
- B07C—POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
- B07C5/00—Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
- B07C5/34—Sorting according to other particular properties
- B07C5/342—Sorting according to other particular properties according to optical properties, e.g. colour
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B07—SEPARATING SOLIDS FROM SOLIDS; SORTING
- B07C—POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
- B07C5/00—Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
- B07C5/34—Sorting according to other particular properties
- B07C5/342—Sorting according to other particular properties according to optical properties, e.g. colour
- B07C5/3425—Sorting according to other particular properties according to optical properties, e.g. colour of granular material, e.g. ore particles, grain
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N33/00—Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
- G01N33/02—Food
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65B—MACHINES, APPARATUS OR DEVICES FOR, OR METHODS OF, PACKAGING ARTICLES OR MATERIALS; UNPACKING
- B65B57/00—Automatic control, checking, warning, or safety devices
- B65B57/10—Automatic control, checking, warning, or safety devices responsive to absence, presence, abnormal feed, or misplacement of articles or materials to be packaged
- B65B57/14—Automatic control, checking, warning, or safety devices responsive to absence, presence, abnormal feed, or misplacement of articles or materials to be packaged and operating to control, or stop, the feed of articles or material to be packaged
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/85—Investigating moving fluids or granular solids
- G01N2021/8592—Grain or other flowing solid samples
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N2223/00—Investigating materials by wave or particle radiation
- G01N2223/60—Specific applications or type of materials
- G01N2223/652—Specific applications or type of materials impurities, foreign matter, trace amounts
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10116—X-ray image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30128—Food products
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30161—Wood; Lumber
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/06—Recognition of objects for industrial automation
Definitions
- This disclosure relates to a method for identifying objects to be sorted. The present disclosure further relates to a sorting method and a sorting device that sort objects based on this identification method.
- Japanese Unexamined Patent Publication No. 2004-301690
- In the conventional technique (Japanese Unexamined Patent Publication No. 2004-301690), the objects to be inspected can be sorted from other foreign substances, but the state of each object and the relationships between neighboring objects cannot be grasped appropriately.
- The present disclosure has been made to solve the above-mentioned problems of the prior art. One object is to provide a method for identifying objects to be sorted that appropriately grasps the state of each object and the relationships between a plurality of objects, and appropriately identifies the objects based on the grasped information. A further object is to provide a sorting method and a sorting device that appropriately identify and sort the objects to be sorted on the same basis.
- The method for identifying objects to be sorted according to the first aspect of the present disclosure comprises a transfer step of transferring the objects to be sorted, an imaging step of imaging the objects during the transfer step, and an identification step of identifying the objects based on the imaging information obtained in the imaging step, wherein the imaging information includes adjacency information between the objects to be sorted and classification information of the objects to be sorted.
- Here, "imaging" means irradiating an object (object to be sorted) with an electromagnetic wave in order to grasp its condition, such as its external or internal state, receiving at least one of the reflected signal (reflected light) and the transmitted signal (transmitted light), and optically forming an image of that condition.
- An "electromagnetic wave" is a wave propagating through mutually changing electric and magnetic fields; light and radio waves are kinds of electromagnetic waves.
- The imaging means that performs "imaging" is configured using an electromagnetic wave generation unit (for example, an X-ray generation unit) that irradiates an electromagnetic wave, and an electromagnetic wave detection unit (for example, an X-ray detection unit) that receives at least one of the transmitted signal and the reflected signal of the irradiated electromagnetic wave. "Adjacency information" is information attached according to the distance between adjacent objects to be sorted (adjacent distance) or the adjacency ratio (the ratio of the distance between the objects to the size of the objects).
- the "classification information” is information attached according to the state of the material to be sorted, for example, "good part information” which is a good part of the material to be sorted, and a defective part or an introspection of the material to be sorted. Examples include "defective part information” which is a defective part such as a defective part. When a plurality of information such as "good part information” and “bad part information” exist as this "classification information", different information (for example, color information) is attached to each "classification information". It is composed of.
- The method for identifying objects to be sorted according to the second aspect of the present disclosure comprises a transfer step of transferring the objects to be sorted, an imaging step of imaging the objects during the transfer step, and an identification step of identifying the objects based on the imaging information obtained in the imaging step, wherein the imaging information includes adjacency information between the objects to be sorted, classification information of the objects, and background information of the objects.
- The method for identifying objects to be sorted according to the third aspect of the present disclosure comprises a transfer step of transferring the objects to be sorted, an imaging step of imaging the objects during the transfer step, and an identification step of identifying the objects based on the imaging information obtained in the imaging step. The identification step is performed using an inference model generated from learning information regarding the objects to be sorted; the learning information includes adjacency information between the objects and classification information of the objects, and in the identification step the objects are identified using the imaging information and the inference model.
- the "inference model” is a support vector machine (SVM), simple Bayes classifier, logistic regression, random forest, neural network, deep learning, K-nearest neighbor method, AdaBoost, bagging, C4.5, kernel method, stochastic method. It can be obtained using a system (learning program) configured by combining at least one of gradient descent method, lasso regression, ridge regression, elastic Net, interpolation method, and cooperative filtering. Specifically, by having such a system (learning program) learn predetermined image information (learning information, learning image), it is possible to obtain an "inference model” capable of carrying out the identification step according to the present disclosure. In the following, when it is simply expressed as “inference model", the same concept is used.
- In the method for identifying objects to be sorted, the learning information may include adjacency information between the objects, classification information of the objects, and background information of the objects.
- The sorting method according to the present disclosure comprises a transfer step of transferring the objects to be sorted, an imaging step of imaging the objects during the transfer step, an identification step of identifying the objects based on the imaging information obtained in the imaging step, and a sorting step of sorting the objects based on the identification information obtained in the identification step, wherein the imaging information includes adjacency information between the objects and classification information of the objects.
- In the sorting method, the imaging information may include adjacency information between the objects, classification information of the objects, and background information of the objects.
- The sorting method according to another aspect comprises a transfer step of transferring the objects to be sorted, an imaging step of imaging the objects during the transfer step, an identification step of identifying the objects based on the imaging information obtained in the imaging step, and a sorting step of sorting the objects based on the identification information obtained in the identification step. The identification step is performed using an inference model generated from learning information regarding the objects; the learning information includes adjacency information between the objects and classification information of the objects, and in the identification step the objects are identified using the imaging information and the inference model.
- In the sorting method, the learning information may include adjacency information between the objects, classification information of the objects, and background information of the objects.
- The sorting apparatus according to the present disclosure comprises a transfer means for transferring the objects to be sorted, an imaging means for imaging the objects during transfer by the transfer means, an identification means for identifying the objects based on the imaging information obtained by the imaging means, and a sorting means for sorting the objects based on the identification information obtained by the identification means, wherein the imaging information includes adjacency information between the objects and classification information of the objects.
- In the sorting apparatus, the imaging information may include adjacency information between the objects, classification information of the objects, and background information of the objects.
- The sorting apparatus according to another aspect comprises a transfer means for transferring the objects to be sorted, an imaging means for imaging the objects during transfer by the transfer means, an identification means for identifying the objects based on the imaging information obtained by the imaging means, and a sorting means for sorting the objects based on the identification information obtained by the identification means. The identification means has an inference model generated from learning information regarding the objects; the learning information includes adjacency information between the objects and classification information of the objects, and the identification means identifies the objects using the imaging information and the inference model.
- the "imaging means” includes an electromagnetic wave generating unit (for example, an X-ray generating unit) that irradiates an electromagnetic wave, and transmitted information (transmitted light, transmitted signal) and reflected information (reflected light, reflected signal) of the irradiated electromagnetic wave. It is configured by using an electromagnetic wave detection unit (for example, an X-ray detection unit) that detects at least one of the above.
- In the sorting apparatus, the learning information may include adjacency information between the objects, classification information of the objects, and background information of the objects.
- According to the present disclosure, a method for identifying objects to be sorted can be obtained that appropriately grasps the state of each object and the relationships between a plurality of objects, and appropriately identifies the objects based on the grasped information. Further, according to the present disclosure, a sorting method and a sorting device can be obtained that appropriately identify and sort the objects on the same basis.
- FIG. 1 shows a partial schematic diagram (schematic side view of the X-ray identification means) of the sorting apparatus for objects to be sorted according to the embodiment of the present disclosure. FIG. 2 shows a flowchart of the method for constructing the inference model used in the X-ray sorting means (the X-ray identification means forming it) according to the embodiment of the present disclosure. FIG. 3A is a schematic diagram of the imaging information in the method of constructing the inference model, showing the raw image acquired by irradiating the objects to be sorted with X-rays. FIG. 3B shows the trimmed images cut out from the raw image.
- FIG. 3C is a schematic diagram of the imaging information in the method of constructing the inference model, showing a labeling image (labeling image group) in which labeling processing has been performed, based on the trimmed images, for good parts (good part information, classification information), defective parts (defective part information, classification information), adjacent parts (adjacency information), and background parts (background information). FIG. 4 shows a flowchart of the identification process (method for identifying objects to be sorted) performed using the X-ray sorting means (the X-ray identification means forming it) according to the embodiment of the present disclosure.
- Examples of the "objects to be sorted" include granular materials such as nuts, beans, grains, resins, stones, glass, and wood.
- A "cereal grain" is a grain of a cereal, and cereals are a general term for foodstuffs obtained from plants. Examples of grains include rice, wheat, foxtail millet, Japanese barnyard millet, corn, soybeans, adzuki beans, and buckwheat.
- FIG. 1 shows a partial schematic view (partial schematic side view) of a sorting device for objects to be sorted according to the embodiment of the present disclosure. More specifically, FIG. 1 shows a schematic side view of the X-ray identification means 10, which forms a part of the sorting device. FIG. 2 shows a flowchart of the method of constructing the inference model used in the X-ray sorting means according to the present embodiment. FIGS. 3A to 3C show schematic views of the imaging information and the like used when constructing the inference model: FIG. 3A shows the raw image acquired by irradiating the objects to be sorted with an electromagnetic wave (X-rays);
- FIG. 3B shows the trimmed images (trimmed image group) cut out from the acquired raw image in order to generate training images (a non-limiting example of "learning information" in the claims);
- and FIG. 3C shows a labeling image (labeling image group) in which, based on the trimmed images, labeling processing has been performed for good parts (good part information as an example of classification information), defective parts (defective part information as an example of classification information), adjacent parts (adjacency information), and background parts (background information).
- The labeling image is used as a learning image (learning information). Further, FIG. 4 shows a flowchart of the identification process performed using the X-ray identification means 10 (described later).
- FIGS. 5A to 5C show schematic views of the imaging information and the like at the time of the identification step performed using the X-ray identification means according to the present embodiment: FIG. 5A shows the raw image captured by irradiating the objects to be sorted with X-rays;
- FIG. 5B shows the inference result image when the inference model is applied to the captured raw image;
- and FIG. 5C shows the filtered image when the inference result image is filtered.
- As shown in FIG. 1, the X-ray identification means 10 forming the sorting device comprises an X-ray generation unit 11 (electromagnetic wave generation unit) that irradiates X-rays 11a onto the objects S to be sorted on the transport path 91, and an X-ray detection unit 12 (electromagnetic wave detection unit) that detects the X-rays 11a emitted from the X-ray generation unit 11.
- The X-ray identification means 10 according to the present embodiment is configured using, in addition to the X-ray generation unit 11 and the X-ray detection unit 12, a partition wall provided so as to cover the X-ray generation unit 11 and the like, various control means such as image processing means connected to each unit, and other components.
- In the X-ray identification means 10, the objects S to be sorted move between the X-ray generation unit 11 and the X-ray detection unit 12, either by driving the transport path 91 or by giving the transport path 91 a predetermined inclination angle. That is, in the X-ray identification means 10 according to the present embodiment, the objects S are transportable in a predetermined direction (left or right in FIG. 1) between the X-ray generation unit 11 and the X-ray detection unit 12.
- The step of transferring the objects S in this way is a non-limiting example of the "transfer step" in the claims.
- The "transfer step" in the present disclosure covers not only a state in which the objects S are transferred by a belt or the like, but also, in some cases, a state in which the objects S are released into the air and transferred.
- The X-ray identification means 10 shown in FIG. 1 forms a part of the sorting device, and may also function as a part of the machine learning device for constructing the inference model mounted on the sorting device.
- Next, a method of constructing the inference model mounted on the sorting device (X-ray identification means 10) according to the present embodiment will be described.
- The machine learning device for constructing the inference model according to the present embodiment is configured using the X-ray identification means 10 and a personal computer (hereinafter also "PC", not shown).
- The PC is configured using arithmetic processing units such as a CPU (central processing unit) and a GPU (graphics processing unit), each built from an arithmetic unit, a control device, a clock, registers, and the like, together with memory, input/output devices, a bus (signal circuit), and so on.
- The machine learning device used when constructing the inference model according to the present embodiment is thus configured using a PC or the like, and the PC is provided with a learning program composed of a combination of at least one of the learners listed above, beginning with a support vector machine (SVM) and a naive Bayes classifier.
- In the present embodiment, the learning program is configured mainly for deep learning.
- FIG. 2 shows a flowchart of the method for constructing the inference model used in the X-ray sorting means according to the present embodiment, and FIGS. 3A to 3C show schematic diagrams of the imaging information (acquired images) and the like used when constructing the inference model.
- First, as shown in FIG. 2, the X-ray identification means 10 is used to acquire an image (X-ray transmission image) of the objects S to be sorted on the transport path 91 (image acquisition step: step S201).
- In the image acquisition step S201, a line-shaped raw image 310 is acquired of the plurality of objects S on the transport path 91 (see FIG. 3A). More specifically, in step S201, image information for a certain number of lines (for example, 512 lines) is taken in from the X-ray detection unit 12; it is confirmed against a predetermined threshold value that this image information (raw image 310) captures the objects S, and the image is stored in memory as appropriate.
- Next, a trimming process is performed on predetermined portions of the raw image 310 acquired in the image acquisition step S201 (trimming process step: step S203). That is, in step S203, the trimmed image group 320 (first trimmed image 321 to sixth trimmed image 326) is generated from the raw image 310 (see FIG. 3B). The trimming is performed so that an object S is present in each trimmed image.
- In the present embodiment, the following are cut out and extracted: images having good part information S1 (classification information), defective part information S2 (classification information), and background information S4 of the objects S (first trimmed image 321 and second trimmed image 322); images having good part information S1, adjacency information S3, and background information S4 (third trimmed image 323, fourth trimmed image 324); and images having good part information S1 and background information S4 (fifth trimmed image 325, sixth trimmed image 326).
- In other words, images are cut out and extracted in states having at least two kinds of information: for example, a state having good part information S1, defective part information S2, and background information S4 (first trimmed image 321 and the like), a state having good part information S1, adjacency information S3, and background information S4 (third trimmed image 323 and the like), and a state having good part information S1 and background information S4 (fifth trimmed image 325 and the like).
- The trimming process step S203 is performed so as to include, at a predetermined ratio, the good part information S1, which is the good portion of an object S (the portion of the object S excluding defective portions such as external defects and internal defects (invisible internal shapes)), and the defective part information S2, which is a defective portion such as an external defect or an internal defect of the object S.
- In the present embodiment, the good part information S1 and the defective part information S2 are non-limiting examples of "classification information".
- The trimming process step S203 is also performed so as to include, at a predetermined ratio, portions where the adjacency (adjacent distance or ratio of distances) between objects S is a predetermined value or less. At least one of a portion where objects S are in contact with each other, a portion where the distance between objects S is a predetermined distance or less, and a portion where the adjacency ratio between objects S is a predetermined ratio or less corresponds to "adjacency information".
- In other words, the trimming process step S203 is performed so that the classification information, the adjacency information, and the background information occur at predetermined ratios.
- The trimming size in the trimming process step (S203) of the present embodiment is determined according to the size (projected area) and shape of the objects S. Specifically, trimming is performed with a size similar to the size of a single object S. When the object S has a shape close to a "circle" or a "square", trimming is performed with a size about the same as or slightly larger than the average size (area) of a single object (roughly: "average size of a single object - 5%" ≤ trimming size ≤ "average size of a single object + 15%").
- In other cases, trimming is performed with a size about the same as or slightly smaller than the average size of a single object (roughly: "average size of a single object - 15%" ≤ trimming size ≤ "average size of a single object + 5%").
- In the present embodiment, the trimming process is performed at a size of 40 × 40 pixels (a size similar to, or slightly smaller than, the size of an average single object), as in the sketch below.
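A minimal sketch of such a trimming step, assuming the object centers have already been located by some prior detection; the image size, function name, and center coordinates are hypothetical:

```python
# Hedged sketch of the 40x40 trimming described above; the raw image,
# object centers, and clamping behavior are placeholders, not the patent's.
import numpy as np

def trim_patches(raw: np.ndarray, centers, size: int = 40):
    """Cut size x size patches around each object center, clamped to the image."""
    half = size // 2
    h, w = raw.shape
    patches = []
    for cy, cx in centers:
        y0 = min(max(cy - half, 0), h - size)
        x0 = min(max(cx - half, 0), w - size)
        patches.append(raw[y0:y0 + size, x0:x0 + size])
    return patches

raw_image = np.zeros((512, 640))          # stand-in for raw image 310
patches = trim_patches(raw_image, [(100, 200), (130, 220)])
print(len(patches), patches[0].shape)     # -> 2 (40, 40)
```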
- In the present embodiment, a normalization process is performed for each trimmed image. The timing of this normalization process is not limited to after the trimming process step (S203); the raw image 310 may be normalized first and the trimming process performed on the normalized image.
- In the normalization process, shading correction that evens out the light amount unevenness and the sensitivity of the elements of the detection unit, conversion that keeps the gradation values corresponding to the electromagnetic wave intensity within a certain range, and the like are performed.
- In the present embodiment, the normalization process is performed so that the gradation values corresponding to the electromagnetic wave intensity fall within the range of 0 to 1, as in the sketch below.
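A minimal sketch of such a normalization, assuming a hypothetical reference ("white") exposure for the shading correction; the function name and epsilon are illustrative only:

```python
# Hedged sketch: shading correction against an assumed reference exposure,
# then scaling of gradation values into the [0, 1] range described above.
import numpy as np

def normalize(patch: np.ndarray, white: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    corrected = patch / (white + eps)            # even out light/sensor unevenness
    lo, hi = corrected.min(), corrected.max()
    return (corrected - lo) / (hi - lo + eps)    # gradation values into 0..1
```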
- Next, the labeling process is performed using the trimmed image group 320 (the trimmed image group after normalization) generated in the trimming process step S203 (labeling process step: step S205). That is, in this labeling process step S205, the labeling image group 330 (first labeling image 331 to sixth labeling image 336; a non-limiting example of "learning information" in the claims) is generated from the normalized trimmed images (see FIG. 3C).
- In the labeling process step S205, based on the trimmed images 321 to 326 of the trimmed image group 320 (each trimmed image after normalization), every pixel forming each image is examined to determine what it shows (what information it carries), and is assigned one of: the good part labeling section SP1 (a non-limiting example of "classification information" in the claims), the defective part labeling section SP2 (likewise corresponding to "classification information"), the adjacent labeling section SP3 (a non-limiting example of "adjacency information" in the claims), or the background labeling section SP4 (background information).
- The "classification information" in the present embodiment is information attached according to the state of the objects S; the good part labeling section SP1 and the defective part labeling section SP2 correspond to the classification information. Further, the adjacent labeling section SP3 corresponds to the adjacency information, and the background labeling section SP4 corresponds to the background information.
- The adjacent labeling section SP3 (adjacency information) assigned in the labeling process step S205 according to the present embodiment is information attached according to the distance between adjacent objects S (adjacent distance) or the adjacency ratio (the ratio of the distance between objects S to the size of the objects S). For example, the adjacent labeling SP3 (adjacency information) is assigned to at least one of a portion where objects S are in contact with each other, a portion where the distance between objects S is a predetermined distance or less, and a portion where the adjacency ratio between objects S is a predetermined ratio or less.
- In other words, a portion where adjacent objects S would otherwise be judged to be connected to each other is defined as the adjacent labeling section SP3 (adjacency information).
- The determination criteria for each labeling are defined, for example, as follows.
- In the present embodiment, the brightness of the acquired signal is used as the labeling criterion. A plurality of brightness reference values (a first labeling reference value and a second labeling reference value) are defined, and the labeling process is performed based on them; the second labeling reference value is defined as a brighter value than the first labeling reference value.
- A good part labeling section SP1 is assigned to portions (dark portions) with a brightness at or below the first labeling reference value, and a background labeling section SP4 is assigned to portions (bright portions) with a brightness at or above the second labeling reference value.
- A defective part labeling section SP2 is assigned to portions of predetermined brightness (between the first and second labeling reference values) that appear within a region whose brightness is otherwise at or below the first labeling reference value, that is, to portions of intermediate brightness lying between portions at or below the first labeling reference value and portions at or above the second labeling reference value. Further, an adjacent labeling section SP3 is assigned to regions where a plurality of regions at or below the first labeling reference value are close to or in contact with each other and which are darker than the second labeling reference value. A simplified sketch of these rules follows.
- By performing the labeling process step S205 in this way, every pixel constituting each labeling image 331 to 336 of the labeling image group 330 is assigned one of the four labelings (labeling sections SP1 to SP4).
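The brightness rules above might be approximated as in the following sketch; the two reference values and the neighborhood-based adjacency test are simplified stand-ins, not the patent's exact criteria:

```python
# Hedged sketch of the brightness-based labeling rules. The reference
# values and the adjacency test are simplified placeholders.
import numpy as np
from scipy import ndimage

GOOD, DEFECT, ADJACENT, BACKGROUND = 0, 1, 2, 3

def label_pixels(img: np.ndarray, ref1: float, ref2: float) -> np.ndarray:
    """ref2 (second labeling reference value) must be brighter than ref1."""
    labels = np.full(img.shape, DEFECT, dtype=np.uint8)   # mid-brightness default
    labels[img <= ref1] = GOOD                            # dark pixels: good part
    labels[img >= ref2] = BACKGROUND                      # bright pixels: background

    # Adjacency stand-in: a mid-brightness pixel whose 3x3 neighborhood
    # touches two or more distinct dark (object) regions is relabeled.
    objects, n = ndimage.label(img <= ref1)
    if n >= 2:
        for y, x in zip(*np.nonzero(labels == DEFECT)):
            hood = objects[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
            if len(set(hood[hood > 0].tolist())) >= 2:
                labels[y, x] = ADJACENT
    return labels
```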
- The labeling images 331 to 336 generated in step S205 are stored as appropriate in a predetermined memory as learning images (learning information).
- In the present embodiment, a hollow portion of an object S is recognized as a defective portion (internal defect) and is labeled as such (the defective part labeling section SP2 is formed), and the portions of the object S other than defective portions are labeled as good parts (the good part labeling section SP1 is formed).
- Next, it is determined whether the labeling process step S205 is complete based on a predetermined condition (completion determination step: step S207).
- The "predetermined condition" may be, for example, at least one of: the total number of labeling images for which the labeling process has been completed; the total number of pixels in those labeling images; the numbers of pixels of the good part labeling sections and the defective part labeling sections in the labeling images, and their respective ratios; and the total number of pixels of classification information, the number of pixels of adjacency information, and the total number of pixels of background information in the labeling images, and their respective ratios.
- This "predetermined condition" may be a single one of the above conditions or a combination of several of them.
- In the present embodiment, the condition is, for example, that the number of labeling images for which the labeling process has been completed is a predetermined number or more (for example, 500 or more) and that each kind of labeling accounts for a predetermined ratio of the images.
- When it is determined in the completion determination step S207 that the labeling process step S205 is complete based on the predetermined condition ("Yes" in S207), the processing from step S209 onward is performed; when it is determined that it is not complete ("No" in S207), the processing is repeated from step S201.
- When it is determined in the completion determination step S207 according to the present embodiment that the labeling process step S205 is complete ("Yes" in S207), an inference model generation process is then performed using the labeling images and the like (inference model generation step: step S209).
- In the inference model generation step S209, deep learning is performed based on the labeling images (learning images, learning information) stored in memory (the labeling images are read into the PC and machine learning is performed by the learning program), and the inference model is generated.
- The number of labeling images used when generating the inference model (images 331 to 336, with each pixel labeled with one of the four labelings SP1 to SP4) is, for example, 500 or more; a simplified training sketch follows.
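A minimal training sketch under stated assumptions: a small fully convolutional network (standing in for whatever architecture the patent's learning program actually uses) fitted to 500 labeled 40 × 40 images with one of four classes per pixel. All hyperparameters and the random data are illustrative:

```python
# Hedged sketch: per-pixel four-class segmentation trained on 500 labeled
# 40x40 images. Architecture and hyperparameters are assumptions.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 4, 1),                       # 4 per-pixel classes (SP1..SP4)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.rand(500, 1, 40, 40)            # normalized trimmed images (stand-in)
targets = torch.randint(0, 4, (500, 40, 40))   # labeling images (per-pixel classes)

for epoch in range(10):
    opt.zero_grad()
    loss = loss_fn(net(images), targets)
    loss.backward()
    opt.step()
```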
- Next, a confirmation process is performed on the inference model generated in the inference model generation step S209 (inference model confirmation step: step S211).
- In the inference model confirmation step S211, as one method, inference is performed with the generated inference model on the raw images 310 underlying the labeling images used when generating the model, and the correct answer rate is calculated by comparing the inference results with the learning information. As another method, a test image different from the learning information used when generating the model is prepared, together with test image information based on that test image (labeled image information produced using the test image); inference is performed on the test image with the generated model, and the correct answer rate is calculated by comparing the inference result with the test image information.
- In the inference model confirmation step S211, at least one of the two methods above is carried out; when one is carried out, its correct answer rate is stored in a predetermined memory, and when both are carried out, each correct answer rate is stored in a predetermined memory.
- Next, it is determined whether the result of the inference model confirmation step S211 is equal to or greater than an expected value (inference model determination step: step S213).
- Whether the result is equal to or greater than the expected value is determined by comparing the correct answer rates from the inference model confirmation step S211 stored in memory with the expected values.
- In the present embodiment, the expected values are, for example, a correct answer rate of 95% or more against the learning information and a correct answer rate of 90% or more against the test image information; a sketch of this check follows.
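A minimal sketch of this confirmation check, assuming per-pixel comparison of the inference result with the labeled ground truth; the function names are illustrative:

```python
# Hedged sketch: per-pixel correct answer rate, compared with the
# expected values quoted above (95% learning data, 90% test data).
import numpy as np

def correct_answer_rate(pred: np.ndarray, truth: np.ndarray) -> float:
    return float((pred == truth).mean())

def model_accepted(rate_train: float, rate_test: float) -> bool:
    return rate_train >= 0.95 and rate_test >= 0.90
```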
- When it is determined in the inference model determination step S213 that the correct answer rate of the generated inference model is equal to or greater than the predetermined expected value ("Yes" in S213), construction of the inference model is complete. When it is determined that it is not ("No" in S213), an adjustment step (step S215) is performed, and the processing is repeated from step S209.
- The adjustment step S215 for appropriately generating the inference model is performed as described above.
- Examples of the adjustment step S215 include changing parameters used for learning, such as the learning rate, batch size, and dropout rate.
- In the adjustment step S215, the learning information and the test image information may also be shuffled, or each kind of information may be increased or decreased as necessary.
- Further, a plurality of models may be generated as needed, and the inference model determination step S213, the adjustment step S215, and the like may be performed for each of them to create a more appropriate inference model.
- In the present embodiment, when it is determined that the correct answer rate of the generated inference model is not equal to or greater than the predetermined expected value ("No" in S213), the processing from step S215 onward is performed; however, the present disclosure is not limited to this. For example, if it is determined that the correct answer rate does not reach the predetermined expectation, the processing may be performed again from step S201. More specifically, if the expected value is still not reached after repeating the adjustment step S215 a plurality of times (for example, three times), the processing may return to step S201 and the image acquisition step and subsequent steps may be performed again.
- The "inference model" for the present embodiment is completed when it is determined in the inference model determination step S213 that the generated inference model meets or exceeds the predetermined expected value ("Yes" in S213).
- Next, the identification process of the objects S is performed using the inference model generated as described above.
- In the present embodiment, the inference model is mounted on a control means (not shown) electrically connected to the X-ray identification means 10 of FIG. 1, and the identification process of the objects S is performed using it.
- FIG. 4 shows a flowchart of the method for identifying objects to be sorted, performed using the X-ray identification means 10 according to the present embodiment.
- FIGS. 5A to 5C show schematic views of the imaging information and the like during the identification method (identification step) performed using the X-ray identification means 10: FIG. 5A shows the raw image captured by irradiating the objects with X-rays;
- FIG. 5B shows the inference result image when the inference model is applied to the captured raw image;
- and FIG. 5C shows the filtered image when the inference result image is filtered.
- First, the X-ray generation unit 11 and the X-ray detection unit 12 are used to perform imaging of the objects S during the transfer step (a non-limiting example of the "imaging step" in the claims: step S401).
- In this step S401 (imaging step), a line-shaped raw image 510 is acquired of the plurality of objects S on the transport path 91 (see FIG. 5A).
- More specifically, in step S401, X-rays are irradiated from the X-ray generation unit 11 onto the objects S, image information for a certain number of lines (for example, 512 lines) is taken in from the X-ray detection unit 12, and this image information (raw image 510) is stored in memory as appropriate. Further, in the present embodiment, the raw image 510 is normalized as appropriate.
- Next, an inference model implementation step S403 is performed, with the image obtained in the imaging step (the normalized raw image 510) as input data (a non-limiting example of "imaging information" in the claims). That is, the inference model is applied to the raw image 510 captured in the imaging step S401 (step S403), and the inference result (inference result image 520) is obtained (step S405).
- FIG. 5B shows the inference result image 520 when the inference model is applied to the captured raw image 510. As shown in FIG. 5B, by applying the inference model, every pixel of the inference result image 520 is assigned one of four labelings: the good part labeling section SSP1 (classification information), the defective part labeling section SSP2 (classification information), the adjacent labeling section SSP3 (adjacency information), or the background labeling section SSP4 (background information).
- Next, a filtering process step (step S407) is performed on the inference result image 520 obtained in the inference model implementation step S403.
- As the filtering process, for example, it is checked how many pixels of a defective portion (defective part labeling section SSP2) are contained within a predetermined image size; when the number of pixels is larger than a specified value, the portion is treated as defective, and when it is smaller, it can be treated as a good part.
- Examples of other filtering processes include smoothing, edge extraction, sharpening, and morphological conversion.
- Further, the defective part labeling sections SSP2 in the inference result image 520 may be reduced (for example, by 20% to 50%), or defective part labeling sections SSP2 and adjacent labeling sections SSP3 whose pixel counts are at or below a predetermined value may be deleted and replaced with another labeling section.
- In the present embodiment, the defective part labeling sections SSP2 in the inference result image 520 are displayed reduced in size; defective part labeling sections whose pixel counts are at or below a predetermined value are replaced with the good part labeling section (the "good part" in the classification specification described later) or the background labeling section (the "background part" in the classification specification described later); and the adjacent labeling sections SSP3 are replaced with the background labeling section (the "background part" in the classification specification described later).
- The image after passing through these filtering processes S407 is the filtered image 530 (see FIG. 5C); a simplified sketch follows.
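The pixel-count filtering rule and the adjacent-to-background replacement described above might look like the following sketch; the window size and count threshold are assumptions, not values from the patent:

```python
# Hedged sketch of the filtering step: defect labels are kept only where
# enough defect pixels fall inside a window of a predetermined size, and
# adjacent labels are replaced with background.
import numpy as np
from scipy import ndimage

GOOD, DEFECT, ADJACENT, BACKGROUND = 0, 1, 2, 3

def filter_defects(labels: np.ndarray, window: int = 5, min_count: int = 8) -> np.ndarray:
    defect = (labels == DEFECT).astype(np.float32)
    # Count defect pixels in a window x window neighborhood of each pixel.
    counts = ndimage.uniform_filter(defect, size=window) * window * window
    out = labels.copy()
    out[(labels == DEFECT) & (counts < min_count)] = GOOD   # too few pixels: good part
    out[labels == ADJACENT] = BACKGROUND                    # adjacent parts: background
    return out
```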
- In the present embodiment, the filtering process S407 is performed on the inference result image 520 (see FIG. 5B) obtained as the inference result S405. By performing this filtering process S407, the filtered image 530 shown in FIG. 5C is formed, and based on this filtered image 530, a classification specifying process (identification process based on classification specification; identification step) is performed (step S409).
- In the present embodiment, the defective part labeling sections SSP2 at two places in the inference result image 520 (FIG. 5B) are reduced by about 30% to identify the defective portions C2 (FIG. 5C). The adjacent labeling sections SSP3 (FIG. 5B) in the inference result image 520 are specified as the background portion C4 (FIG. 5C) as a result of the filtering, and the good part labeling sections SSP1 and the background labeling sections SSP4 of FIG. 5B are identified as the good portions C1 and the background portion C4 of FIG. 5C.
- In this way, the "good parts", "defective parts", and "background parts" are specified based on the filtered image 530, and the adjacent labeling portions where objects are adjacent to each other are specified as "background parts".
- As described above, various information is attached to each pixel of the inference result image 520: each pixel carries one of the four labelings, namely the good part labeling section SSP1 (classification information), the defective part labeling section SSP2 (classification information), the adjacent labeling section SSP3 (adjacency information), or the background labeling section SSP4 (background information).
- In the classification specifying process (identification process, identification step) S409, each pixel in the filtered image 530, obtained by performing the filtering process S407 on the inference result image 520, is specified as one of "good part", "defective part", or "background part".
- As a result, the background portion C4 and the plurality of objects to be sorted are distinguished; more specifically, the individuals having a defective portion C2 (the two objects to be sorted CA and CB) are identified, and the other objects to be sorted are identified as individuals having only the good portion C1.
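- As a sketch of how individuals such as CA and CB can then be separated and judged, the following continues the previous example with the same assumed label codes: each connected non-background region is treated as one object and flagged as defective if it contains any defect-labeled pixel. This is an illustrative reading of the classification result, not the patented procedure itself.

```python
import numpy as np
import cv2

GOOD, DEFECT, ADJACENT, BACKGROUND = 1, 2, 3, 0   # same illustrative codes as above

def classify_objects(filtered: np.ndarray) -> list:
    """Separate individual objects (connected non-background regions) in the
    filtered image and mark each one defective if it contains defect pixels."""
    object_mask = (filtered != BACKGROUND).astype(np.uint8)
    n_objs, comp = cv2.connectedComponents(object_mask)
    results = []
    for i in range(1, n_objs):
        blob = comp == i
        results.append({
            "object_id": i,
            "defective": bool((filtered[blob] == DEFECT).any()),
            "pixels": int(blob.sum()),
        })
    return results
```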
- In step S411, it is determined whether or not to end the identification process (identification step) of the object to be sorted. When the identification process is continued, the processes from step S401 onward are repeated. Specifically, unless otherwise instructed, as long as there is an object to be sorted S conveyed on the transport path 91 and the imaging step is being performed using the X-ray generation unit 11, the X-ray detection unit 12, and the like, the identification process is continued.
- As described above, in the present embodiment, the imaging processing step, the inference model implementation step (inference step), the filtering processing step, the classification specifying step, and the like are performed using the X-ray identification means 10 and the inference model, whereby the object to be sorted S is identified.
- Since the identification device (identification means) forming the sorting device for objects to be sorted according to the present embodiment is configured and functions as described with reference to FIGS. 1 to 5C, it has the following functions and effects. Hereinafter, the configuration and the operation and effects of the identification method performed using the identification device according to the present embodiment will be described.
- The method for identifying the object to be sorted according to the present embodiment includes a transfer step of transferring the object to be sorted, an imaging step of imaging the object to be sorted during the transfer step, and an identification step of identifying the object to be sorted based on the imaging information (raw image 510) obtained in the imaging step. The identification step is performed using an inference model generated based on learning information (labeling image group 330, labeling images 331 to 336) regarding the object to be sorted. The learning information includes adjacent information between the objects to be sorted (adjacent labeling portion SP3) and classification information of the objects to be sorted (good portion labeling portion SP1, defective portion labeling portion SP2), and in the identification step, the object to be sorted is identified using the imaging information and the inference model.
- Since the method for identifying the object to be sorted according to the present embodiment is configured as described above, the following effects can be obtained.
- According to the identification method according to the present embodiment, the inference result can easily be compared with the imaging information, so that the efficiency of the arithmetic processing at the time of identification can be improved.
- Further, according to the identification method according to the present embodiment, since the adjacent information (adjacent labeling portion SP3) is used, an adjacent portion is not erroneously recognized as a defective portion, and the identification accuracy can be improved.
- Further, according to the identification method according to the present embodiment, by using the adjacent information, objects to be sorted that are in contact with each other can be recognized as separated, so that each object to be sorted can be identified with higher accuracy.
- Further, according to the identification method according to the present embodiment, as shown in FIGS. 5A to 5C, no object cutting-out process is required, so that the processing speed of the identification process can be increased.
- Further, according to the present embodiment, the defective portion can be easily specified by converting the inference result into an image.
- Further, according to the present embodiment, since the result is obtained pixel by pixel, each pixel's data can easily be used for various processes (post-processing and the like).
- Further, according to the present embodiment, by treating the inference result as an image, image processing techniques can be applied.
- The method for identifying the object to be sorted according to the present embodiment includes a transfer step of transferring the object to be sorted, an imaging step of imaging the object to be sorted during the transfer step, and an identification step of identifying the object to be sorted based on the imaging information (raw image 510) obtained in the imaging step. The identification step uses an inference model generated based on learning information (labeling image group 330, labeling images 331 to 336) regarding the object to be sorted, and the learning information may include adjacent information between the objects to be sorted (adjacent labeling portion SP3), classification information of the objects to be sorted (good portion labeling portion SP1, defective portion labeling portion SP2), and background information of the object to be sorted. In the identification step, the imaging information and the inference model may be used to identify the object to be sorted.
- According to the identification method according to the present embodiment, since the learning information includes the background information of the object to be sorted, each object to be sorted can be clearly identified. Further, according to the identification method according to the present embodiment, since the adjacent information (adjacent labeling portion SP3) is converted into background information after being specified, the identification process for each object to be sorted can be performed even more clearly.
- an "inference model” is generated, and the “inference model” generated in this way is mounted on an identification device (discrimination means) forming a sorting device. , The identification method is carried out using these.
- an identification device discrimination means
- the identification method is carried out using these.
- Hereinafter, a sorting device provided with an identification device equipped with the above-mentioned inference model will be described with reference to the drawings and the like.
- FIG. 6 shows a schematic configuration diagram of a sorting device (belt-type sorting device) according to the second embodiment of the present disclosure. Specifically, FIG. 6 shows a schematic configuration diagram of the belt-type sorting device 101 equipped with the identification device (identification means) described with reference to FIGS. 1 to 5C.
- The sorting device 101 according to the present embodiment is configured using an object supply unit 120 for supplying the object to be sorted S0, a transport unit 130 for conveying the object to be sorted S0, an optical detection unit 150 for optically detecting the object to be sorted S0, a sorting unit 160 for performing the sorting process of the object to be sorted S0, the X-ray identification device 10, and the like.
- The object to be sorted S0 stored in the object supply unit 120 is supplied to one end of the transport unit 130, and while the supplied object S0 is being conveyed by the transport unit 130 (during the transfer step), an imaging process (and the identification process based on it) is performed by the X-ray identification device 10.
- The object to be sorted S0 conveyed by the transport unit 130 is discharged from the other end of the transport unit 130 along the fall locus L, and the optical detection unit 150 and the sorting unit 160 are provided around the discharged object to be sorted S0.
- The sorting device 101 is configured to drive the sorting unit 160 based on the identification signal obtained from at least one of the X-ray identification device 10 and the optical detection unit 150, and the sorting unit 160 sorts the object to be sorted S0 into either the non-defective product accommodating unit 181 or the defective product discharge unit 182.
- The object supply unit 120 forming the sorting device 101 is configured using a storage unit for storing the object to be sorted S0, a vibration feeder, and the like, and is configured to supply the object to be sorted S0 to one end of the transport unit 130 as necessary.
- The transport unit 130 forming the sorting device 101 is configured using a transport belt 131, transport rollers 132 and 133, a drive motor 134, and the like.
- The transport belt 131 is wound endlessly between the rollers 132 and 133, which are provided in parallel.
- A drive motor 134 is connected to one of the rollers 133 via a belt or the like.
- The transport belt 131 is configured to be rotationally driven at a constant speed by the rotation of the drive motor 134.
- The transport belt 131 forming the transport unit 130 corresponds to the transport path 91 in the first embodiment.
- The X-ray identification device 10 described in the first embodiment is provided at a substantially central portion, in the transport direction, of the transport unit 130 according to the present embodiment. More specifically, as shown in FIG. 6, the X-ray generation unit 11 is provided at a position above the object to be sorted S0 on the transport belt 131 (the transport path 91 in the first embodiment), and the X-ray detection unit 12 is provided at a position below the object to be sorted S0 (between the transport belts 131 in FIG. 6). The imaging information obtained by the X-ray identification device 10 is transmitted to the X-ray image processing means 173 described later. Further, in the present embodiment, partition walls are appropriately provided in order to prevent leakage of X-rays.
- The optical detection unit 150 is provided around the fall locus L along which the object to be sorted S0, supplied from the object supply unit 120 to one end of the transport unit 130, is discharged from the other end of the transport unit 130 (transport belt 131). Specifically, the optical detection unit 150 is configured using a light receiving unit 151 (first light receiving unit 151A, second light receiving unit 151B), a plurality of light emitting units 153, background units 155 corresponding to the respective light receiving units 151A and 151B, and the like.
- The two light receiving units 151A and 151B are provided at positions substantially symmetrical with respect to the fall locus L.
- The light receiving unit 151 is configured using a solid-state image sensor (a CCD image sensor, a CMOS image sensor, or the like).
- The light emitting units 153 are configured to irradiate the optical detection position P on the fall locus L with light having a predetermined wavelength from a plurality of angles.
- The light receiving unit 151 images the object to be sorted S0 that has reached the optical detection position P on the fall locus L, and the imaging information (light reception signal) regarding each object to be sorted S0 is transmitted to the image processing means 174 described later.
- The sorting unit 160 forming the sorting device 101 according to the present embodiment is configured using a nozzle unit 161, a solenoid valve 162, and the like.
- The nozzle unit 161 is configured to eject a fluid (for example, air) toward the object to be sorted S0 on the fall locus L.
- The solenoid valve 162 is provided between a fluid supply unit (for example, an air compressor, not shown) and the nozzle unit 161, and is configured to control the supply of fluid to the nozzle unit 161 based on a signal from the discrimination result coupling mechanism 175 described later.
- The sorting device 101 has a touch panel 171 through which various signals and the like can be input when the device is used, and the touch panel 171 is electrically connected to the CPU 172 of the sorting device 101. Further, the CPU 172 is electrically connected to the X-ray image processing means 173 and the image processing means 174, and the X-ray image processing means 173 and the image processing means 174 are electrically connected to the discrimination result coupling mechanism 175.
- The imaging information from the X-ray detection unit 12 can be transmitted to the X-ray image processing means 173, and the imaging information from the two light receiving units 151A and 151B can be transmitted to the image processing means 174.
- The discrimination result coupling mechanism 175 and the solenoid valve 162 are electrically connected, and based on the identification signal from the discrimination result coupling mechanism 175, the fluid ejection state from the nozzle unit 161 (ejection time, ejection timing, and the like) is controlled via the solenoid valve 162.
- The sorting device 101 according to the present embodiment is configured as described above and functions as follows.
- The object to be sorted S0 supplied from the object supply unit 120 is conveyed (transferred) by the transport unit 130, and an imaging process (imaging step) is performed by the X-ray identification device 10 on the object to be sorted S0 being conveyed (during the transfer step).
- Further, an imaging process (imaging step) is performed by the optical detection unit 150 on the object to be sorted S0 discharged from the transport unit 130.
- Then, a sorting process (sorting step) using the sorting unit 160 is performed based on the identification signal derived from the imaging information of at least one of the X-ray identification device 10 and the optical detection unit 150.
- The ejection timing and ejection time of the fluid ejected from the nozzle unit 161 via the solenoid valve 162 are controlled based on the identification signal, and the ejected fluid sorts the object to be sorted S0 into either the non-defective product accommodating unit 181 or the defective product discharge unit 182.
- When the sorting device 101 is used, operating condition information such as various information regarding the object to be sorted S0 and sorting conditions (identification conditions) is input using the touch panel 171, which can input predetermined signals to the CPU 172 provided in the device. This operating condition information is transmitted to the X-ray image processing means 173 and the image processing means 174 via the CPU 172. In the X-ray image processing means 173, for example, one inference model is selected from a plurality of inference models and a predetermined threshold value and the like are set; in the image processing means 174, a predetermined threshold value and the like are also set.
- Note that the CPU 172 (or at least one of the X-ray image processing means 173 and the image processing means 174) is configured so that a plurality of inference models, a plurality of threshold values (a plurality of threshold values used for identification), and the like can be mounted thereon.
- After the operating condition information is input using the touch panel 171, the sorting device 101 is driven, the imaging information from the X-ray detection unit 12 regarding the object to be sorted S0 is transmitted to the X-ray image processing means 173, and the imaging information from the light receiving units 151A and 151B regarding the object to be sorted S0 is transmitted to the image processing means 174.
- The imaging information transmitted to the X-ray image processing means 173 and the image processing means 174 is subjected to predetermined arithmetic processing as appropriate (or passed on as the imaging information as it is) and then transmitted to the discrimination result coupling mechanism 175.
- In the discrimination result coupling mechanism 175, arithmetic processing is performed that combines, for the same object to be sorted S0, the imaging information from the X-ray image processing means 173 and the imaging information from the image processing means 174, taking into consideration the transport speed of the object to be sorted S0 and the like, and an identification signal (solenoid valve control signal) based on the combined discrimination result is transmitted to the solenoid valve 162.
- As a result, the fluid ejection state from the nozzle unit 161 is controlled via the solenoid valve 162, and the ejected fluid is used to sort the object to be sorted S0.
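- A minimal sketch of such coupling is shown below: both detections are projected to their arrival time at the nozzle using the transport speed, matched within a tolerance, and turned into one valve command. All distances, speeds, and the pulse length are hypothetical values for illustration only.

```python
from dataclasses import dataclass

XRAY_TO_NOZZLE_M = 0.60     # assumed sensor-to-nozzle distances (m)
OPTICAL_TO_NOZZLE_M = 0.15
BELT_SPEED_M_S = 1.5        # assumed transport speed (m/s)

@dataclass
class Detection:
    t_detect: float         # time (s) the object passed the sensing point
    defective: bool

def schedule_ejection(xray: Detection, optical: Detection, tol_s: float = 0.01):
    """Couple two discrimination results for what should be the same object and
    emit one solenoid-valve command if either channel flags a defect."""
    t_xray = xray.t_detect + XRAY_TO_NOZZLE_M / BELT_SPEED_M_S
    t_opt = optical.t_detect + OPTICAL_TO_NOZZLE_M / BELT_SPEED_M_S
    if abs(t_xray - t_opt) > tol_s:
        return None                                       # detections do not line up
    if xray.defective or optical.defective:
        return {"open_at": t_xray, "duration_s": 0.005}   # illustrative pulse
    return None
```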
- As described above, the sorting device 101 according to the present embodiment is configured using the X-ray identification device 10 (X-ray identification means) described in the first embodiment, an inference model, and the like; using this sorting device 101, the imaging processing step, the inference model implementation step (inference step), the filtering processing step, the classification specifying step, and the like are performed, and identification (discrimination processing) and sorting (sorting processing) of the object to be sorted S0 are performed.
- Since the sorting device 101 according to the present embodiment is configured and functions as described with reference to FIG. 6 and the like (FIGS. 1 to 5C for the X-ray identification device 10), it has the following functions and effects. Hereinafter, the configuration and the operation and effects of the sorting device (sorting method) according to the present embodiment will be described.
- The sorting method according to the present embodiment includes a transfer step of transferring the object to be sorted S0, an imaging step of imaging the object to be sorted S0 during the transfer step, an identification step of identifying the object to be sorted S0 based on the imaging information obtained in the imaging step, and a sorting step of sorting the object to be sorted S0 based on the identification information obtained in the identification step. The identification step is performed using an inference model generated based on the learning information regarding the object to be sorted S0, the learning information includes the adjacent information between the objects to be sorted and the classification information of the objects to be sorted, and in the identification step, the object to be sorted S0 is identified using the imaging information and the inference model.
- The sorting device according to the present embodiment includes the transport unit 130 (transfer means) for transferring the object to be sorted S0, the X-ray identification device 10 (imaging means) for imaging the object to be sorted S0 during transfer by the transport unit 130, the identification means (composed of at least one of the X-ray image processing means 173 and the discrimination result coupling mechanism 175) for identifying the object to be sorted S0 based on the imaging information obtained by the X-ray identification device 10, and the sorting unit 160 (sorting means) for sorting the object to be sorted S0 based on the identification signal (identification information) obtained by the identification means. The identification means has an inference model generated based on the learning information regarding the object to be sorted, the learning information includes the adjacent information between the objects to be sorted and the classification information of the objects to be sorted, and the identification means identifies the object to be sorted S0 using the imaging information and the inference model.
- Since the sorting device (sorting method) according to the present embodiment is configured and functions as described above, the following effects can be obtained.
- According to the sorting device (sorting method) according to the present embodiment, the inference result can easily be compared with the imaging information, which improves the efficiency of the arithmetic processing when the identification means generates the identification signal, and the sorting unit is controlled based on the identification signal thus obtained. Therefore, according to the sorting device according to the present embodiment, the arithmetic processing for sorting the object to be sorted S0 can be made more efficient.
- Further, according to the sorting device (sorting method) according to the present embodiment, since the adjacent information (adjacent labeling portion SP3) is used when generating the identification signal, an adjacent portion is not erroneously recognized as a defective portion, and the identification accuracy of the identification signal can be improved. Therefore, according to the sorting device according to the present embodiment, a highly accurate sorting process can be carried out using this identification signal with high discrimination accuracy. Further, according to the sorting device (sorting method) according to the present embodiment, by using the adjacent information, objects to be sorted that are in contact with each other can be recognized as separated, so that each object to be sorted can be identified and sorted with higher accuracy.
- Further, according to the sorting device (sorting method) according to the present embodiment, as shown in FIGS. 5A to 5C described above, no object cutting-out process is required when identifying the object to be sorted S0, so that the processing speed of the identification process can be increased, and consequently the speed of the sorting process can also be increased.
- The sorting device according to the present embodiment includes the transport unit 130 (transfer means) for transferring the object to be sorted S0, the X-ray identification device 10 (imaging means) for imaging the object to be sorted S0 during transfer by the transport unit 130, the identification means (composed of at least one of the X-ray image processing means 173 and the discrimination result coupling mechanism 175) for identifying the object to be sorted S0 based on the imaging information obtained by the X-ray identification device 10, and the sorting unit 160 (sorting means) for sorting the object to be sorted S0 based on the identification signal (identification information) obtained by the identification means. The identification means has an inference model generated based on the learning information regarding the object to be sorted, the learning information includes the adjacent information between the objects to be sorted, the classification information of the objects to be sorted, and the background information of the objects to be sorted, and the identification means identifies the object to be sorted S0 using the imaging information and the inference model.
- According to this sorting device, since the learning information includes the background information of the objects to be sorted, each object to be sorted can be clearly identified when the identification signal is generated, and a highly accurate identification signal can be obtained. Therefore, a highly accurate sorting process can be realized using such a highly accurate identification signal. Further, according to the sorting device according to the present embodiment, since the adjacent information is converted into background information after being specified, the identification process for each object to be sorted is performed even more clearly, and a highly accurate identification signal can be obtained. Therefore, according to the sorting device according to the present embodiment, a highly accurate sorting process can be realized.
- FIG. 7 shows a schematic configuration diagram of a sorting device (chute-type sorting device) according to the third embodiment of the present disclosure. Specifically, FIG. 7 shows a schematic configuration diagram of the chute-type sorting device 201 equipped with the identification device (identification means) described with reference to FIGS. 1 to 5C.
- The sorting device 201 according to the present embodiment is configured using an object supply unit 220 for supplying the object to be sorted S0, a chute 230 (transport unit, transfer means) for transferring the object to be sorted S0, an optical detection unit 250 for optically detecting the object to be sorted S0, a sorting unit 260 for performing the sorting process of the object to be sorted S0, the X-ray identification device 10, and the like.
- The object to be sorted S0 stored in the object supply unit 220 is supplied to one end of the inclined plate portion 231 forming the chute 230, and while the object to be sorted S0 naturally falls (flows down) on the inclined plate portion 231 (during the transfer step), an imaging process (and the identification process based on it) is performed by the X-ray identification device 10.
- The object to be sorted S0 that flows down the chute 230 is discharged from the other end of the inclined plate portion 231 along the fall locus L, and the optical detection unit 250 and the sorting unit 260 are provided around the discharged object to be sorted S0.
- The sorting device 201 is configured to drive the sorting unit 260 based on the identification signal obtained from at least one of the X-ray identification device 10 and the optical detection unit 250, and the sorting unit 260 sorts the object to be sorted S0 into either the non-defective product accommodating unit 281 or the defective product discharge unit 282.
- The object supply unit 220 forming the sorting device 201 is configured using a storage unit for storing the object to be sorted S0, a vibration feeder, and the like, and is configured to supply the object to be sorted S0 to one end of the inclined plate portion 231 forming the chute 230 as necessary.
- The chute 230 forming the sorting device 201 is configured using the inclined plate portion 231.
- The inclined plate portion 231 is provided so as to be inclined by, for example, about 30° to 70° with respect to the ground plane (horizontal plane), in consideration of the installation area within the apparatus and the falling speed (flowing speed) of the naturally falling object to be sorted S0. The inclination angle of the inclined plate portion 231 may be set to be larger than the angle of repose of the object to be sorted and to an angle at which the object to be sorted flowing down the inclined plate portion 231 does not bounce. In the present embodiment, the inclined plate portion 231 forming the chute 230 is inclined by about 60° with respect to the ground plane. The inclined plate portion 231 forming the chute 230 corresponds to the transport path 91 in the first embodiment.
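- As a frictionless back-of-envelope check of these angles (the plate length and the no-friction assumption are ours, not the embodiment's), the flow-down speed at the lower end of a 60° plate can be estimated as follows:

```python
import math

g = 9.81                        # gravitational acceleration (m/s^2)
theta = math.radians(60)        # plate inclination used in this embodiment
length = 1.0                    # hypothetical plate length (m)

a = g * math.sin(theta)         # acceleration along the plate, friction ignored
t = math.sqrt(2 * length / a)   # time to slide the full plate from rest
v = a * t                       # exit speed at the discharge end
print(f"exit after {t:.2f} s at {v:.2f} m/s")
```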
- The X-ray identification device 10 described in the first embodiment is provided at a substantially central portion of the chute 230 according to the present embodiment. More specifically, as shown in FIG. 7, the X-ray generation unit 11 is provided at a position above the object to be sorted S0 that naturally falls on the inclined plate portion 231 (the transport path 91 in the first embodiment), and the X-ray detection unit 12 is provided at a position below the object to be sorted S0 (a position below the inclined plate portion 231 in FIG. 7). The imaging information obtained by the X-ray identification device 10 is transmitted to the X-ray image processing means 273 described later. Further, in the present embodiment, partition walls are appropriately provided in order to prevent leakage of X-rays.
- The optical detection unit 250 is provided around the fall locus L along which the object to be sorted S0, supplied from the object supply unit 220 to one end of the chute 230, is discharged from the other end of the chute 230 (inclined plate portion 231). Specifically, the optical detection unit 250 is configured using a light receiving unit 251 (first light receiving unit 251A, second light receiving unit 251B), a plurality of light emitting units 253, background units 255 corresponding to the respective light receiving units 251A and 251B, and the like.
- The light receiving unit 251 is configured using a solid-state image sensor (a CCD image sensor, a CMOS image sensor, or the like).
- The light emitting units 253 are configured to irradiate the optical detection position P on the fall locus L with light having a predetermined wavelength from a plurality of angles.
- The light receiving unit 251 images the object to be sorted S0 that has reached the optical detection position P on the fall locus L, and the imaging information (light reception signal) regarding each object to be sorted S0 is transmitted to the image processing means 274 described later.
- The sorting unit 260 forming the sorting device 201 according to the present embodiment is configured using a nozzle unit 261, a solenoid valve 262, and the like.
- The nozzle unit 261 is configured to eject a fluid (for example, air) toward the object to be sorted S0 on the fall locus L.
- The solenoid valve 262 is provided between a fluid supply unit (for example, an air compressor, not shown) and the nozzle unit 261, and is configured to control the supply of fluid to the nozzle unit 261 based on a signal from the discrimination result coupling mechanism 275 described later.
- The sorting device 201 has a touch panel 271 through which various signals and the like can be input when the device is used, and the touch panel 271 is electrically connected to the CPU 272 of the sorting device 201. Further, the CPU 272 is electrically connected to the X-ray image processing means 273 and the image processing means 274, and the X-ray image processing means 273 and the image processing means 274 are electrically connected to the discrimination result coupling mechanism 275.
- The imaging information from the X-ray detection unit 12 can be transmitted to the X-ray image processing means 273, and the imaging information from the two light receiving units 251A and 251B can be transmitted to the image processing means 274.
- The discrimination result coupling mechanism 275 and the solenoid valve 262 are electrically connected, and based on the identification signal from the discrimination result coupling mechanism 275, the fluid ejection state from the nozzle unit 261 (ejection time, ejection timing, and the like) is controlled via the solenoid valve 262.
- The sorting device 201 according to the present embodiment is configured as described above and functions as follows.
- The object to be sorted S0 supplied from the object supply unit 220 is conveyed (transferred) by the chute 230, and an imaging process (imaging step) is performed by the X-ray identification device 10 on the object to be sorted S0 being conveyed (during the transfer step).
- Further, an imaging process (imaging step) is performed by the optical detection unit 250 on the object to be sorted S0 discharged from the chute 230.
- Then, a sorting process (sorting step) using the sorting unit 260 is performed based on the identification signal derived from the imaging information of at least one of the X-ray identification device 10 and the optical detection unit 250.
- The ejection timing and ejection time of the fluid ejected from the nozzle unit 261 via the solenoid valve 262 are controlled based on the identification signal, and the ejected fluid sorts the object to be sorted S0 into either the non-defective product accommodating unit 281 or the defective product discharge unit 282.
- When the sorting device 201 is used, operating condition information such as various information regarding the object to be sorted S0 and sorting conditions (identification conditions) is input using the touch panel 271, which can input predetermined signals to the CPU 272 provided in the device. This operating condition information is transmitted to the X-ray image processing means 273 and the image processing means 274 via the CPU 272. In the X-ray image processing means 273, for example, one inference model is selected from a plurality of inference models and a predetermined threshold value and the like are set; in the image processing means 274, a predetermined threshold value and the like are also set.
- Note that the CPU 272 (or at least one of the X-ray image processing means 273 and the image processing means 274) is configured so that a plurality of inference models, a plurality of threshold values (a plurality of threshold values used for identification), and the like can be mounted thereon.
- After the operating condition information is input, the sorting device 201 is driven, the imaging information from the X-ray detection unit 12 regarding the object to be sorted S0 is transmitted to the X-ray image processing means 273, and the imaging information from the light receiving units 251A and 251B regarding the object to be sorted S0 is transmitted to the image processing means 274.
- The imaging information transmitted to the X-ray image processing means 273 and the image processing means 274 is subjected to predetermined arithmetic processing as appropriate (or passed on as the imaging information as it is) and then transmitted to the discrimination result coupling mechanism 275.
- In the discrimination result coupling mechanism 275, arithmetic processing is performed that combines, for the same object to be sorted S0, the imaging information from the X-ray image processing means 273 and the imaging information from the image processing means 274, taking into consideration the transport speed of the object to be sorted S0 and the like, and an identification signal (solenoid valve control signal) based on the combined discrimination result is transmitted to the solenoid valve 262.
- As a result, the fluid ejection state from the nozzle unit 261 is controlled via the solenoid valve 262, and the ejected fluid is used to sort the object to be sorted S0.
- As described above, the sorting device 201 according to the present embodiment is configured using the X-ray identification device 10 (X-ray identification means) described in the first embodiment, an inference model, and the like; using this sorting device 201, the imaging processing step, the inference model implementation step (inference step), the filtering processing step, the classification specifying step, and the like are performed, and identification (discrimination processing) and sorting (sorting processing) of the object to be sorted S0 are performed.
- The sorting device 201 according to the present embodiment is configured and functions as described with reference to FIG. 7 and the like (FIGS. 1 to 5C for the X-ray identification device 10). The sorting device 201 according to this third embodiment and the sorting device 101 according to the second embodiment described above differ in part of their configuration (for example, the configuration of the transport unit), but have the same configuration as regards the identification means and the sorting means. Therefore, the sorting device 201 according to the third embodiment has substantially the same functions and effects as the sorting device 101 described in the second embodiment, and the details of those functions and effects are omitted here.
- In the above embodiments, the case where the object to be sorted is nuts, beans, grains, or the like has been described, but the present invention is not limited to this configuration. Therefore, for example, the object to be sorted may be vegetables, confectionery, processed foods, or the like.
- Further, in the above embodiments, the case where the electromagnetic wave is an X-ray has been described, but the present invention is not limited to this configuration. Therefore, for example, the electromagnetic wave may be ultraviolet rays, visible rays, near-infrared rays, infrared rays, microwaves, or the like.
- the "inference model” constituting the identification method, the selection method, and the selection device is an inference model obtained by using deep learning
- the "inference model” is a support vector machine (SVM), simple Bayes classifier, logistic regression, random forest, neural network, deep learning, k-nearest neighbor method, AdaBoost, bagging, C4.5, kernel method, probability. It may be configured using a learning program that combines at least one of gradient descent, lasso regression, ridge regression, elastic Net, interpolation, and co-filtering.
- Further, the present invention is not limited to the configuration in which the inference model is constructed within the sorting device. Therefore, for example, a machine learning device may be configured using an X-ray identification means provided outside the sorting device, an inference model may be constructed using this machine learning device outside the sorting device, and the constructed inference model may be mounted on the sorting device.
- Also, the present invention is not limited to this configuration. Therefore, for example, the necessary imaging information may be collected using the X-ray identification means forming the sorting device or an equivalent device provided outside, and an inference model may be constructed using the collected imaging information and functions inside the sorting device. That is, if necessary, the machine learning device may be configured by the sorting device itself or a part thereof, the inference model may be constructed or updated within the sorting device, and the necessary inference model may be appropriately selected and mounted on the identification means.
- Further, the present invention is not limited to this configuration. Therefore, for example, in the present invention, when the identification process and the sorting process are performed without constructing an inference model, the "adjacent information between the objects to be sorted" and the "classification information of the objects to be sorted", which are imaging information, may be used. That is, predetermined threshold information may be set for the "adjacent information between the objects to be sorted" and the "classification information of the objects to be sorted", and by comparing this threshold information with the "adjacent information between the objects to be sorted" and the "classification information of the objects to be sorted" regarding the objects to be sorted, the objects to be sorted may be appropriately identified and sorted as non-defective or defective.
- Furthermore, in the present invention, when the identification process and the sorting process are performed without constructing an inference model, the "adjacent information between the objects to be sorted", the "classification information of the objects to be sorted", and the "background information of the objects to be sorted", which are imaging information, may be used. That is, in addition to the above, predetermined threshold information may also be set for the "background information of the objects to be sorted", and by comparing these pieces of threshold information with the "adjacent information between the objects to be sorted", the "classification information of the objects to be sorted", and the "background information of the objects to be sorted" regarding the objects to be sorted, the objects to be sorted may be appropriately identified and sorted as non-defective or defective.
- Further, in the above embodiments, the case where the trimming process is performed with a size of 40 × 40 pixels in the trimming process step for generating the learning images (learning information) has been described, but the present invention is not limited to this configuration. The trimming size in this trimming process is about the same as the size of a single object to be sorted, and is appropriately determined according to the shape of the object to be sorted. Therefore, for example, the trimming process may be performed at a size of 32 × 32 pixels, 64 × 64 pixels, or the like, depending on the size and shape of the single object to be sorted.
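- A minimal sketch of such trimming is given below; the center coordinates and the border handling (clipping at the image edge) are our assumptions:

```python
import numpy as np

def trim_patches(raw: np.ndarray, centers: list, size: int = 40) -> list:
    """Cut size x size learning patches around given object centers; patches
    near the border are clipped and may come out smaller."""
    half = size // 2
    patches = []
    for cx, cy in centers:
        x0, y0 = max(cx - half, 0), max(cy - half, 0)
        patches.append(raw[y0:y0 + size, x0:x0 + size])
    return patches
```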
- Further, in the above embodiments, the case where the classification information is the good portion information and the defective portion information has been described, but the present invention is not limited to this configuration, and various kinds of classification information other than good portion and defective portion are applicable. Therefore, for example, as the classification information, a classification according to the degree of quality of a predetermined portion, a classification regarding the type of the object to be sorted, and the like can be used.
- Each function of the controller may be realized by a CPU executing a predetermined program, by a dedicated circuit, or by a combination thereof. A single controller may be used, or a plurality of controllers may be used so that the functions are distributed among them and the required functions are realized by the plurality of controllers as a whole.
- Further, in the present invention, when the identification process and the sorting process are performed without constructing an inference model, one or more templates for determining the "adjacent information between the objects to be sorted" and one or more templates for determining the "classification information of the objects to be sorted" may be prepared. Such templates may be the same as or similar to the first labeling image 331 to the sixth labeling image 336 shown in FIG. 3C. Then, template matching may be performed by image processing: for each of the "adjacent information between the objects to be sorted" and the "classification information of the objects to be sorted", the correlation coefficient between each template and the raw image 510 (or an image obtained by applying normalization processing to the raw image 510) may be obtained, and the "adjacent information between the objects to be sorted" and the "classification information of the objects to be sorted" may be determined from the correlation coefficients.
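- A small sketch of this template matching follows; the threshold and OpenCV's normalized correlation mode are illustrative choices, not the disclosure's prescription:

```python
import cv2
import numpy as np

def match_templates(raw: np.ndarray, templates: dict, thresh: float = 0.8) -> dict:
    """Correlate each template against the raw image (TM_CCOEFF_NORMED gives a
    correlation coefficient in [-1, 1]) and report positions above a threshold."""
    hits = {}
    for name, tpl in templates.items():
        score = cv2.matchTemplate(raw, tpl, cv2.TM_CCOEFF_NORMED)
        ys, xs = np.where(score >= thresh)
        hits[name] = list(zip(xs.tolist(), ys.tolist()))
    return hits
```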
- The present disclosure provides a method for identifying objects to be sorted that appropriately grasps the state of the object to be sorted itself and the relationship between a plurality of objects to be sorted, and appropriately identifies the objects to be sorted based on the grasped information, as well as a sorting method and a sorting device for objects to be sorted that appropriately identify and sort the objects to be sorted based on such identification information.
- Conventionally, a device using X-rays or the like for inspecting the internal shape of an inspection target has been known; basically, an abnormally shaped portion inside the target was extracted by binarizing the shading of a grayscale image having no color information.
- Specifically, the identification method according to the present disclosure is configured as follows.
- The method for identifying the object to be sorted according to the present embodiment includes a transfer step of transferring the object to be sorted, an imaging step of imaging the object to be sorted during the transfer step, and an identification step of identifying the object to be sorted based on the imaging information obtained in the imaging step. The identification step is performed using an inference model generated based on the learning information regarding the object to be sorted, the learning information includes the adjacent information between the objects to be sorted, the classification information of the objects to be sorted, and the background information of the objects to be sorted (background labeling portion SP4), and in the identification step, the object to be sorted is identified using the imaging information and the inference model. The sorting method and the sorting device according to the present disclosure are configured using the above-described identification method.
- Since the problems of the existing devices (conventional techniques) are appropriately solved, the identification method, the sorting method, and the sorting device according to the present disclosure are considered to have high industrial applicability.
- 10 … X-ray identification means (X-ray identification device, identification device, imaging means)
- 11 … X-ray generation unit (X-ray generation unit forming the imaging means)
- 11a … X-rays
- 12 … X-ray detection unit (X-ray detection unit forming the imaging means)
- 91 … Transport path (transfer means, transport path forming the transfer means)
- 101 … Sorting device (belt-type sorting device)
- 120 … Object supply unit
- 130 … Transport unit (transfer means)
- 131 … Transport belt (transfer means, transport belt forming the transfer means, transport path)
- 132, 133 … Rollers
- 134 … Drive motor
- 150 … Optical detection unit
- 151 … Light receiving unit
- 151A … First light receiving unit
- 151B … Second light receiving unit
- 153 … Light emitting unit
- 155 … Background unit
- 160 … Sorting unit (sorting means)
- 161 … Nozzle unit
- 162 … Solenoid valve
- 171 … Touch panel
- 172 … CPU (central processing unit)
- 173 … X-ray image processing means
- 174 … Image processing means
- 175 … Discrimination result coupling mechanism
- 181 … Non-defective product accommodating unit
- 182 … Defective product discharge unit
- 201 … Sorting device (chute-type sorting device)
- 220 … Object supply unit
- 230 … Chute (transport unit, transfer means)
- 231 … Inclined plate portion (transfer means, inclined plate portion forming the transfer means, transport path)
- 250 … Optical detection unit
- 251 … Light receiving unit
- 251A … First light receiving unit
- 251B … Second light receiving unit
- 253 … Light emitting unit
- 255 … Background unit
- 260 … Sorting unit (sorting means)
- 261 … Nozzle unit
- 262 … Solenoid valve
- 271 … Touch panel
- 272 … CPU (central processing unit)
- 273 … X-ray image processing means
- 274 … Image processing means
- 275 … Discrimination result coupling mechanism
- 281 … Non-defective product accommodating unit
- 282 … Defective product discharge unit
- 310 … Raw image
- 320 … Trimmed image group
- 321 to 326 … First to sixth trimmed images (trimmed images)
- 330 … Labeling image group (learning images, learning information)
- 331 to 336 … First to sixth labeling images (labeling images) (learning images, learning information)
- 510 … Raw image (imaging information)
- 520 … Inference result image
- 530 … Filtered image
- S, SS, S0 … Objects to be sorted
- S1 … Good portion information (classification information)
- S2 … Defective portion information (defect information) (classification information)
- S3 … Adjacent information
- S4 … Background information
- SP1, SSP1 … Good portion labeling portion (classification information)
- SP2, SSP2 … Defective portion labeling portion (classification information)
- SP3, SSP3 … Adjacent labeling portion (adjacent information)
- SP4, SSP4 … Background labeling portion (background information)
- C1 … Good portion (good portion labeling portion, classification information)
- C2 … Defective portion (defective portion labeling portion, classification information)
- C4 … Background portion (background labeling portion, background information)
- CA, CB … Objects to be sorted
Abstract
Description
More specifically, in the prior art according to JP 2004-301690 A, first, the absorption spectra of visible light and near-infrared light of the reflected light obtained by irradiating the object to be sorted (food and foreign matter contained in the food) with light are measured. Next, second-order differential processing is performed on these absorption spectra, and a wavelength band in which the food and other foreign matter exhibit different second-order differential spectra is measured. Then, by creating a second-order differential spectroscopic image of the food in the above-described wavelength band, it becomes possible to detect foreign matter and the like contained in the food.
The present disclosure has been made to solve the problems of the above prior art, and an object thereof is to provide a sorting method and a sorting device for objects to be sorted that appropriately grasp the state of the object to be sorted itself and the relationship between a plurality of objects to be sorted, and appropriately identify and sort the objects to be sorted based on the grasped information.
The "adjacent information" is information attached according to the distance between adjacent objects to be sorted (adjacent distance) or the adjacency ratio (the ratio of the distance between objects to be sorted to the size of the object to be sorted); for example, it is configured by attaching predetermined information (for example, color information) different from that of the objects to be sorted and the background to at least one of a portion where the objects to be sorted are in contact with each other, a portion where the distance between the objects to be sorted is equal to or less than a predetermined distance, and a portion where the adjacency ratio between the objects to be sorted is equal to or less than a predetermined ratio.
Furthermore, the "classification information" is information attached according to the state of the object to be sorted; examples include "good portion information", which indicates a good portion of the object to be sorted, and "defective portion information", which indicates a defective portion such as a chipped portion or a portion with poor internal appearance. When a plurality of pieces of information such as "good portion information" and "defective portion information" exist as this "classification information", different information (for example, color information) is attached for each piece of "classification information".
Here, the "inference model" can be obtained using a system (learning program) configured by combining at least one of a support vector machine (SVM), naive Bayes classifier, logistic regression, random forest, neural network, deep learning, k-nearest neighbor method, AdaBoost, bagging, C4.5, kernel method, stochastic gradient descent, lasso regression, ridge regression, Elastic Net, interpolation, and collaborative filtering. Specifically, by having such a system (learning program) learn predetermined image information (learning information, learning images), an "inference model" capable of carrying out the identification step according to the present disclosure can be obtained. In the following as well, the simple term "inference model" has the same meaning.
In the method for identifying objects to be sorted according to this third aspect, the learning information may include adjacent information between the objects to be sorted, classification information of the objects to be sorted, and background information of the objects to be sorted.
In the sorting method according to the present aspect, the imaging information may include adjacent information between the objects to be sorted, classification information of the objects to be sorted, and background information of the objects to be sorted.
In the method for sorting objects to be sorted according to this fifth aspect, the learning information may include adjacent information between the objects to be sorted, classification information of the objects to be sorted, and background information of the objects to be sorted.
In the sorting device according to the present aspect, the imaging information may include adjacent information between the objects to be sorted, classification information of the objects to be sorted, and background information of the objects to be sorted.
Here, the "imaging means" is configured using an electromagnetic wave generation unit (for example, an X-ray generation unit) that irradiates electromagnetic waves, and an electromagnetic wave detection unit (for example, an X-ray detection unit) that detects at least one of transmission information (transmitted light, a transmission signal) and reflection information (reflected light, a reflection signal) of the irradiated electromagnetic waves.
In the sorting device for objects to be sorted according to this seventh aspect, the learning information may include adjacent information between the objects to be sorted, classification information of the objects to be sorted, and background information of the objects to be sorted.
According to the present disclosure, it is possible to obtain a sorting method and a sorting device for objects to be sorted that appropriately grasp the state of the object to be sorted itself and the relationship between a plurality of objects to be sorted, and appropriately identify and sort the objects to be sorted based on the grasped information.
Here, examples of the "object to be sorted" include granular materials such as nuts, beans, grains, resin, stone, glass, and wood. The "grain" refers to a kernel of cereal, and cereal is a general term for foodstuffs obtained from plants. Examples of cereals include rice, wheat, foxtail millet, Japanese barnyard millet, corn, soybeans, adzuki beans, and buckwheat.
FIG. 1 shows a partial schematic view (partial schematic side view) of a sorting device for objects to be sorted according to an embodiment of the present disclosure. More specifically, FIG. 1 shows a schematic view (schematic side view) of the X-ray identification means 10, which is a part of the sorting device. FIG. 2 shows a flowchart of a method for constructing the inference model used in the X-ray sorting means according to the present embodiment. Further, FIGS. 3A to 3C show schematic views of imaging information and the like used when constructing the inference model according to the present embodiment: FIG. 3A shows a raw image acquired by irradiating the objects to be sorted with electromagnetic waves (X-rays); FIG. 3B shows trimmed images (a trimmed image group) obtained by trimming the acquired raw image in order to generate learning images (a non-limiting example of the "learning information" in the claims); and FIG. 3C shows labeling images (a labeling image group) obtained by labeling the trimmed images with respect to good portions (good portion information as an example of classification information), defective portions (defective portion information as an example of classification information), adjacent portions (adjacent information), and background portions (background information). In the embodiment according to the present disclosure, the labeling images are used as the learning images (learning information). FIG. 4 shows a flowchart of the identification step (the method for identifying objects to be sorted) performed using the X-ray sorting means (the X-ray identification means forming it) according to the present embodiment. Further, FIGS. 5A to 5C show schematic views of imaging information and the like during the identification step performed using the X-ray identification means according to the present embodiment: FIG. 5A shows a raw image captured by irradiating the objects to be sorted with X-rays; FIG. 5B shows an inference result image obtained when the inference model is applied to the captured raw image; and FIG. 5C shows a filtered image obtained when filtering is applied to the inference result image.
In the present embodiment, the trimming process step S203 is performed so that portions where the objects to be sorted S are in contact with each other, and portions where the distance between adjacent objects to be sorted S (adjacent distance) or the adjacency ratio (the ratio of the distance between the objects to be sorted S to the size of the object to be sorted S) is equal to or less than a predetermined value, are contained at a predetermined ratio. For example, at least one of a portion where the objects to be sorted S are in contact with each other, a portion where the distance between the objects to be sorted S is equal to or less than a predetermined distance, and a portion where the adjacency ratio between the objects to be sorted S is equal to or less than a predetermined ratio is a non-limiting example of the "adjacent information".
Furthermore, in the present embodiment, the trimming process step S203 is performed so that the classification information, the adjacent information, and the background information are in a predetermined ratio.
Specifically, the trimming process is performed at a size comparable to the size of a single object to be sorted S. In this case, when the object to be sorted S has a shape close to a "circle" or a "square", the trimming process is performed at a size comparable to, or slightly larger than, the average size (area) of a single object to be sorted S ("average single-object size - about 5%" ≤ trimming size ≤ "average single-object size + about 15%"). When the object to be sorted S has a shape close to an "ellipse" or a "rectangle", the trimming process is performed at a size comparable to, or slightly smaller than, the average size of a single object to be sorted S ("average single-object size - about 15%" ≤ trimming size ≤ "average single-object size + about 5%").
For example, in the present embodiment, if the objects to be sorted S are almonds, the trimming process is performed at a size of 40 × 40 pixels (a size comparable to, or slightly smaller than, the average size of a single object).
Specifically, the brightness of the acquired signal is used as the criterion for labeling.
In the present embodiment, for example, a plurality of brightness reference values (a first labeling reference value and a second labeling reference value) are provided, and the labeling process is performed based on these brightness reference values. Here, the second labeling reference value is defined as a brighter value than the first labeling reference value.
Using the first and second labeling reference values, in the present embodiment, for example, the good portion labeling portion SP1 is attached to portions having a brightness equal to or lower than the first labeling reference value (dark portions), and the background labeling portion SP4 is attached to portions having a brightness equal to or higher than the second labeling reference value (bright portions). The defective portion labeling portion SP2 is attached to portions having a predetermined brightness (a predetermined brightness between the first and second labeling reference values) within a region whose brightness is equal to or lower than the first labeling reference value, and to portions having a predetermined brightness lying between a location whose brightness is equal to or lower than the first labeling reference value and a location whose brightness is equal to or higher than the second labeling reference value. Furthermore, the adjacent labeling portion SP3 is attached to regions where a plurality of regions having a brightness equal to or lower than the first labeling reference value are close to or in contact with each other and which are darker than the second labeling reference value.
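A minimal per-pixel sketch of this two-reference labeling follows; the 8-bit reference values and label codes are assumptions for illustration, and the adjacency rule (dark regions close to or touching each other) is omitted because it needs a connectivity test:

```python
import numpy as np

GOOD, DEFECT, ADJACENT, BACKGROUND = 1, 2, 3, 0   # illustrative label codes
REF1, REF2 = 90, 200   # hypothetical first/second labeling reference values (8-bit)

def label_by_brightness(gray: np.ndarray) -> np.ndarray:
    """Pixels at or below the first reference become 'good', pixels at or above
    the second become 'background', and the intermediate band becomes 'defect'."""
    labels = np.empty(gray.shape, dtype=np.uint8)
    labels[gray <= REF1] = GOOD
    labels[gray >= REF2] = BACKGROUND
    labels[(gray > REF1) & (gray < REF2)] = DEFECT
    return labels
```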
Here, the "predetermined condition" may be, for example, at least one of: the total number of labeling images for which the labeling process has been completed; the total number of pixels of the labeling images for which the labeling process has been completed; the total number or respective ratios of the pixels of the good portion labeling portions and the pixels of the defective portion labeling portions in the labeling images; and the total number or respective ratios of the pixels of the classification information, the pixels of the adjacent information, and the pixels of the background information in the labeling images. This "predetermined condition" may be one of the above conditions or a combination of a plurality of the above conditions.
The number of labeling images used when generating this inference model (for example, the images 331 to 336 labeled pixel by pixel with one of the four labels SP1 to SP4) is, for example, 500 or more, and the ratio of the numbers of these labeling images is, for example, "number of images containing (SP1+SP4) : number of images containing (SP1+SP2+SP4) : number of images containing (SP1+SP2+SP3) = 60% : 25% : 15%".
In the inference model confirmation step S211 according to the present embodiment, for example, at least one of the above-described first method and the other method is carried out; when either one is carried out, the accuracy rate obtained at that time is stored in a predetermined memory, and when both are carried out, the respective accuracy rates are stored in a predetermined memory.
Examples of the adjustment step S215 according to the present embodiment include means for changing the learning rate, the batch size, the dropout rate, and other parameters used for learning. In this adjustment step S215, the learning information and the test image information may be shuffled, or the respective pieces of information may be increased or decreased, as necessary. Furthermore, in the present embodiment, a plurality of models may be generated as necessary, and the inference model judgment step S213, the adjustment step S215, and the like may be performed for each of them to create a more appropriate inference model.
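A sketch of such an adjustment loop is shown below; the parameter grid is hypothetical and the scoring function is a stand-in for a real train-and-confirm cycle:

```python
import random
from itertools import product

def train_and_score(lr: float, batch_size: int, dropout: float) -> float:
    """Placeholder for one training + confirmation-step cycle."""
    random.seed(hash((lr, batch_size, dropout)))
    return random.uniform(0.80, 0.99)   # stand-in accuracy

grid = list(product([1e-4, 1e-3], [16, 32], [0.2, 0.5]))
best = max(grid, key=lambda p: train_and_score(*p))
print("best (learning rate, batch size, dropout):", best)
```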
The present invention is not limited to the above embodiments, and can be carried out with various modifications added as necessary within a range conforming to the gist of the present invention, all of which are included in the technical scope of the present invention.
Claims (7)
- 1. A method for identifying objects to be sorted, comprising: a transfer step of transferring the objects to be sorted; an imaging step of imaging the objects to be sorted during the transfer step; and an identification step of identifying the objects to be sorted based on imaging information obtained in the imaging step, wherein the imaging information includes adjacent information between the objects to be sorted and classification information of the objects to be sorted.
- 2. The method for identifying objects to be sorted according to claim 1, wherein the imaging information further includes background information of the objects to be sorted.
- 3. The identification method according to claim 1 or claim 2, wherein the identification step is performed using an inference model generated based on learning information regarding the objects to be sorted, the learning information including adjacent information between the objects to be sorted and classification information of the objects to be sorted, and wherein, in the identification step, the objects to be sorted are identified using the imaging information and the inference model.
- 4. A method for sorting objects to be sorted, comprising: the method for identifying objects to be sorted according to any one of claims 1 to 3; and a sorting step of sorting the objects to be sorted based on identification information obtained in the identification step.
- 5. A sorting device comprising: a transfer unit that transfers objects to be sorted; an imaging unit that images the objects to be sorted during transfer by the transfer unit; an identification unit that identifies the objects to be sorted based on imaging information obtained by the imaging unit; and a sorting unit that sorts the objects to be sorted based on identification information obtained by the identification unit, wherein the imaging information includes adjacent information between the objects to be sorted and classification information of the objects to be sorted.
- 6. The sorting device according to claim 5, wherein the identification unit has an inference model generated based on learning information regarding the objects to be sorted, the learning information including adjacent information between the objects to be sorted and classification information of the objects to be sorted, and wherein the identification unit identifies the objects to be sorted using the imaging information and the inference model.
- 7. An identification device comprising: a transfer unit that transfers a plurality of objects to be identified; an imaging unit that images the plurality of objects to be identified during transfer by the transfer unit; and an identification unit that identifies each individual object to be identified based on imaging information obtained by the imaging unit, wherein the imaging information includes proximity information representing a degree of proximity of at least two adjacent objects to be identified.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/258,938 US20240046613A1 (en) | 2020-12-24 | 2021-11-04 | Method for discriminating a sorting target, sorting method, sorting apparatus, and discrimination apparatus |
CN202180086319.9A CN116686003A (zh) | 2020-12-24 | 2021-11-04 | 被分选物的识别方法、分选方法、分选装置以及识别装置 |
EP21909977.7A EP4270304A4 (en) | 2020-12-24 | 2021-11-04 | IDENTIFICATION METHOD FOR AN OBJECT TO BE SORTED, SORTING METHOD, SORTING DEVICE AND IDENTIFICATION DEVICE |
JP2022571934A JP7497760B2 (ja) | 2020-12-24 | 2021-11-04 | 被選別物の識別方法、選別方法、選別装置、および識別装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2020-214808 | 2020-12-24 | ||
JP2020214808 | 2020-12-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022137822A1 true WO2022137822A1 (ja) | 2022-06-30 |
Family
ID=82157545
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/040519 WO2022137822A1 (ja) | 2020-12-24 | 2021-11-04 | 被選別物の識別方法、選別方法、選別装置、および識別装置 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20240046613A1 (ja) |
EP (1) | EP4270304A4 (ja) |
JP (1) | JP7497760B2 (ja) |
CN (1) | CN116686003A (ja) |
WO (1) | WO2022137822A1 (ja) |
Patent Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH1076233A (ja) * | 1996-09-06 | 1998-03-24 | Fuji Electric Co Ltd | Device for detecting calyx orientation of agricultural produce |
JPH10279060A (ja) * | 1997-04-08 | 1998-10-20 | Ishii Ind Co Ltd | Article conveying apparatus |
JP2000237696A (ja) * | 1999-02-18 | 2000-09-05 | Ishii Ind Co Ltd | Article inspection apparatus |
JP2002039731A (ja) * | 2000-07-25 | 2002-02-06 | Techno Ishii:Kk | Method and apparatus for detecting cavities in objects under inspection |
JP2002062113A (ja) * | 2000-08-17 | 2002-02-28 | Ishii Ind Co Ltd | Method and apparatus for measuring objects under inspection |
JP2002062126A (ja) * | 2000-08-23 | 2002-02-28 | Ishii Ind Co Ltd | Apparatus for measuring elongated objects |
JP2002312762A (ja) * | 2001-04-12 | 2002-10-25 | Seirei Ind Co Ltd | Grain sorting apparatus using a neural network |
JP2007198394A (ja) * | 2002-09-18 | 2007-08-09 | Nippon Soken Inc | Evaporative fuel leak inspection apparatus |
JP2004301690A | 2003-03-31 | 2004-10-28 | Nissei Co Ltd | Method for detecting foreign matter and contaminants in food |
JP2005055245A (ja) * | 2003-08-01 | 2005-03-03 | Seirei Ind Co Ltd | Grain sorting apparatus and grain sorting method |
JP2005083775A (ja) * | 2003-09-05 | 2005-03-31 | Seirei Ind Co Ltd | Grain sorting apparatus |
JP2005091159A (ja) * | 2003-09-17 | 2005-04-07 | Seirei Ind Co Ltd | Grain sorting apparatus |
JP2006064662A (ja) * | 2004-08-30 | 2006-03-09 | Anritsu Sanki System Co Ltd | Foreign matter detection method, foreign matter detection program, and foreign matter detection apparatus |
JP2015529154A (ja) * | 2012-09-07 | 2015-10-05 | Tomra Sorting Ltd | Method and apparatus for handling harvested root crops |
JP2015161522A (ja) * | 2014-02-26 | 2015-09-07 | Ishida Co Ltd | X-ray inspection apparatus |
WO2018131685A1 (ja) * | 2017-01-13 | 2018-07-19 | Nichirei Foods Inc | Method for inspecting pod beans and method for producing pod bean food |
JP2020012741A (ja) * | 2018-07-18 | 2020-01-23 | Nichirei Foods Inc | Method for inspecting pod beans and method for producing pod bean food |
JP2019023664A (ja) * | 2018-11-14 | 2019-02-14 | Omron Corp | Processing apparatus for X-ray inspection and X-ray inspection method |
Non-Patent Citations (1)
Title |
---|
See also references of EP4270304A4 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2022137822A1 (ja) | 2022-06-30 |
CN116686003A (zh) | 2023-09-01 |
US20240046613A1 (en) | 2024-02-08 |
EP4270304A4 (en) | 2024-05-22 |
JP7497760B2 (ja) | 2024-06-11 |
EP4270304A1 (en) | 2023-11-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2471024B1 (en) | Method for real time detection of defects in a food product | |
CN109791111B (zh) | Food inspection apparatus, food inspection method, and learning method for an identification mechanism of a food inspection apparatus | |
Zareiforoush et al. | Design, development and performance evaluation of an automatic control system for rice whitening machine based on computer vision and fuzzy logic | |
JP5898388B2 (ja) | Method and apparatus for scoring and controlling food quality | |
JP6152845B2 (ja) | Optical granular material sorter | |
EP2418020B1 (en) | Sorting device and method for separating products from a random stream of bulk inhomogeneous products | |
EP0566397A2 (en) | Apparatus and method for inspecting articles such as agricultural produce | |
Mohammadi Baneh et al. | Mechatronic components in apple sorting machines with computer vision | |
JP2005300281A (ja) | Seed fragment inspection apparatus | |
JP2013238579A (ja) | Grain component analyzer and grain component analysis method | |
Pearson et al. | Color image based sorter for separating red and white wheat | |
TW202103810A (zh) | Moving article classification system and method based on self-learning technology | |
He et al. | Online detection of naturally DON contaminated wheat grains from China using Vis-NIR spectroscopy and computer vision | |
Chiu et al. | Development of on-line apple bruise detection system | |
WO2022137822A1 (ja) | Method for identifying objects to be sorted, sorting method, sorting apparatus, and identification apparatus | |
JP2002205019A (ja) | Automatic sorting apparatus | |
Sharma et al. | Combining near-infrared hyperspectral imaging and ANN for varietal classification of wheat seeds | |
JP2018001115A (ja) | Potato judging apparatus and potato sorting apparatus | |
Sun et al. | Tea stalks and insect foreign bodies detection based on electromagnetic vibration feeding combination of hyperspectral imaging | |
US20230398576A1 (en) | Method for identifying object to be sorted, sorting method, and sorting device | |
CN115999943A (zh) | Non-metallic ore sorting equipment | |
Chen et al. | Detect black germ in wheat using machine vision | |
JP6742037B1 (ja) | Learning model generation method, learning model, inspection apparatus, abnormality detection method, and computer program | |
Bayram et al. | Color-sorting systems for bulgur production | |
Mishra et al. | Detection of aflatoxin contamination in single kernel almonds using multispectral imaging system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21909977; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2022571934; Country of ref document: JP; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 202180086319.9; Country of ref document: CN |
| WWE | Wipo information: entry into national phase | Ref document number: 18258938; Country of ref document: US |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 2021909977; Country of ref document: EP; Effective date: 20230724 |