WO2024111183A1 - Measuring device and sorting machine - Google Patents
Measuring device and sorting machine
- Publication number
- WO2024111183A1 (PCT/JP2023/029975)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- feature
- transport speed
- line sensor
- transport
- Prior art date
Classifications
- B — PERFORMING OPERATIONS; TRANSPORTING
- B07 — SEPARATING SOLIDS FROM SOLIDS; SORTING
- B07C — POSTAL SORTING; SORTING INDIVIDUAL ARTICLES, OR BULK MATERIAL FIT TO BE SORTED PIECE-MEAL, e.g. BY PICKING
- B07C5/00 — Sorting according to a characteristic or feature of the articles or material being sorted, e.g. by control effected by devices which detect or measure such characteristic or feature; Sorting by manually actuated devices, e.g. switches
- B07C5/34 — Sorting according to other particular properties
- B07C5/342 — Sorting according to other particular properties according to optical properties, e.g. colour
- G — PHYSICS
- G01 — MEASURING; TESTING
- G01B — MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00 — Measuring arrangements characterised by the use of optical techniques
- G01B11/24 — Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/30 — Measuring arrangements characterised by the use of optical techniques for measuring roughness or irregularity of surfaces
- G01B21/00 — Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant
- G01B21/20 — Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant, for measuring contours or curvatures, e.g. determining profile
Definitions
- the present disclosure relates to techniques for measuring geometric and/or dimensional characteristics of an object in whole and/or in part.
- Optical sorting machines have been known for some time that use light information obtained by an optical sensor when a sorting object (hereinafter simply referred to as an object) being transported is irradiated with light from a light source to determine the characteristics of the object (e.g., whether it is a good or defective item) and remove certain objects (e.g., defective items).
- one known example is a sorting machine that uses a line sensor as an optical sensor, in which multiple light-receiving elements are arranged in a straight line in a direction intersecting the transport direction of the object. With a line sensor, the illuminance of light from the light source on each light-receiving element is kept uniform, making it possible to accurately detect defects in the appearance of the sorting object.
- a line sensor acquires an image of an object by synthesizing multiple line-shaped images obtained by repeatedly scanning an object being moved, and so is affected by the speed and/or direction of movement of the object.
- a line sensor has a characteristic in which the resolution in a direction perpendicular to the direction in which the multiple light receiving elements are arranged (hereinafter also referred to as the array direction) changes depending on the speed at which the object is moved in that direction. Therefore, under processing conditions in which the object moving speed cannot be kept constant, the above-mentioned characteristic reduces the detection accuracy of the object's dimensions and shape in the acquired image.
- the above problem is not limited to sorting machines that irradiate light onto the object, but is common to various sorting machines that have an electromagnetic wave irradiation source and a line sensor that detects the electromagnetic waves. Furthermore, this problem is not limited to sorting machines, but is also common to measuring devices for measuring the geometric and/or dimensional characteristics of an object. For these reasons, there is a demand for improving the detection accuracy of the geometric and/or dimensional characteristics of an object in measuring devices or sorting machines that have a line sensor.
- a measuring device for measuring geometric and/or dimensional characteristics of an entire object and/or a part of it.
- the measuring device includes a transport unit configured to transport the object, an electromagnetic wave irradiation source configured to irradiate electromagnetic waves onto the object being transported by the action of the transport unit, a line sensor having a plurality of electromagnetic wave detection elements linearly arranged in a first direction intersecting the transport direction of the object and configured to detect at least one of the electromagnetic waves irradiated from the electromagnetic wave irradiation source and reflected by the object and the electromagnetic waves transmitted through the object, a transport speed detection unit configured to detect the transport speed in a predetermined direction of the object being transported, and a characteristic determination unit configured to determine the characteristics of the object based on an image acquired by the line sensor and the transport speed in the predetermined direction.
- the "object being transported by the action of the transport unit” may include, for example, an object being transported on the transport unit, or an object falling from the transport unit.
- the electromagnetic wave irradiation source may also irradiate at least one of visible light, near-infrared light, and X-rays.
- the feature determination unit determines the features of the object based on the image acquired by the line sensor and the transport speed in a specified direction of the object being transported. Therefore, the geometric and/or dimensional features of the object can be accurately determined by reflecting the difference in the transport speed in the specified direction of the object (in other words, reducing or eliminating the effect on the image of the difference in the transport speed in the specified direction).
- the feature, in the first embodiment, includes a first feature amount for the whole and/or a part of the object.
- the feature determination unit is further configured to correct the image acquired by the line sensor based on the transport speed in a predetermined direction and determine the first feature amount based on the corrected image, or to determine the first feature amount by correcting the second feature amount determined based on the image acquired by the line sensor based on the transport speed in the predetermined direction.
- the first feature amount for the whole and/or a part of the object can be accurately determined.
- the first feature amount may be, for example, at least one of the area, height, width, perimeter length, and circularity of the whole and/or a part of the object. This point is similar to the third embodiment described later.
- the feature, in the first or second embodiment, includes the quality of the whole and/or part of the object.
- the feature determination unit is configured to determine the quality based on a first feature amount of the whole and/or part of the object.
- the feature determination unit is further configured to correct the image acquired by the line sensor based on the transport speed in a predetermined direction and acquire the first feature amount based on the corrected image, or to acquire the first feature amount by correcting the second feature amount determined based on the image acquired by the line sensor based on the transport speed in the predetermined direction.
- the quality of the whole and/or part of the object can be accurately determined by correcting the image or the second feature amount based on the transport speed in the predetermined direction (in other words, by performing a correction that reduces or removes the influence of the difference in the transport speed in the predetermined direction on the image).
- the quality may be, for example, a pass/fail judgment result (a judgment result of whether the product is good or defective) based on a predetermined standard, or may be a quality grade.
- the quality may also include a type of defect.
- the predetermined direction includes a second direction perpendicular to the first direction.
- the features include quality of the whole and/or part of the object.
- the feature determination unit is further configured to determine the quality by comparing the feature amount determined based on the image acquired by the line sensor with a threshold determined based on the transport speed in the second direction.
- the quality of the whole and/or part of the object can be accurately determined by determining the threshold based on the transport speed in the second direction (in other words, correcting the threshold so that the threshold increases or decreases in the same direction of variation as the image dimension in accordance with the variation in the image dimension in the second direction caused by the variation in the transport speed in the second direction).
- the predetermined direction includes a second direction perpendicular to the first direction.
- the feature determination unit is further configured to correct the image by modifying the size of the image in the second direction based on the transport speed in the second direction, or to correct the second feature by modifying the feature component in the second direction of the second feature based on the transport speed in the second direction.
- the predetermined direction includes a first direction.
- the feature determination unit is further configured to correct the image by modifying the coordinate values in the first direction of a plurality of pixels constituting the image based on the transport speed in the first direction, and to determine the features of the object based on the corrected image.
- the image can be corrected so that distortion of the object on the image caused by the object shifting laterally is reduced or eliminated. Therefore, the geometric and/or dimensional features of the object can be accurately determined based on the corrected image.
- a sorting machine includes a measuring device according to any one of the first to sixth aspects, and a sorting section configured to sort objects based on the characteristics determined by the characteristic determination section. With this sorting machine, the same effect as any one of the first to sixth aspects can be obtained.
- FIG. 1 is a schematic diagram illustrating a schematic configuration of an optical sorting machine according to an embodiment of the present disclosure.
- FIG. 2 is a schematic diagram showing the arrangement of optical elements in a color sensor.
- FIG. 3 is an explanatory diagram illustrating an example of an area imaged in one scan of a single sorting object.
- FIG. 4 is an explanatory diagram showing an example of a method for calculating a color shift amount.
- FIG. 5 is an explanatory diagram showing an example of a method for correcting an image based on a moving speed of an object in a second direction.
- FIG. 6 is an explanatory diagram showing an example of a method for correcting an image based on a moving speed of an object in a first direction (image before correction).
- FIG. 7 is an explanatory diagram showing an example of a method for correcting an image based on a moving speed of an object in a first direction (image after correction).
- FIG. 1 is a schematic diagram showing the general configuration of an optical sorting machine (hereinafter simply referred to as a sorting machine) 10 according to an embodiment of the present disclosure.
- the sorting machine 10 is used to sort out defective products (e.g., broken rice, immature grains, discolored grains, damaged grains, dead rice, foreign objects (e.g., pebbles, mud, glass fragments, etc.)) from rice grains (more specifically, brown rice or polished rice) as an example of a sorting object (hereinafter simply referred to as an object) 90.
- the object 90 is not limited to brown rice or polished rice, and may be any granular object.
- the object 90 may be unhulled rice, wheat grains, beans (soybeans, chickpeas, edamame, etc.), resin (pellets, etc.), rubber fragments, etc.
- the sorting machine 10 includes an optical detection unit 20, a storage tank 71, a feeder 72, a chute 73, a good product discharge gutter 74, a defective product discharge gutter 75, a sorting unit 60, and a controller 80.
- the controller 80 controls the overall operation of the sorting machine 10.
- the controller 80 also functions as a transport speed calculation unit 81 and a feature determination unit 82.
- the functions of the controller 80 may be realized by a CPU executing a predetermined program, or may be realized by a dedicated circuit (e.g., PLD, ASIC, etc.), or may be realized by a combination of a CPU and a dedicated circuit.
- the functions of the controller 80 may be assigned to a single integrated device, or may be assigned in a distributed manner to multiple devices. The functions of the controller 80 will be described in detail later.
- the storage tank 71 temporarily stores the objects 90.
- the feeder 72 supplies the objects 90 stored in the storage tank 71 onto a chute 73, which is an example of a transfer section for transferring the objects.
- the objects 90 supplied onto the chute 73 slide downward on the chute 73 and drop from the bottom end of the chute 73.
- the chute 73 has a predetermined width that allows multiple objects 90 to drop simultaneously.
- a conveyor may be used as the transfer section instead of the chute 73.
- the optical detection unit 20 irradiates light onto the object 90 that has slid down the chute 73 (i.e., the object 90 falling from the chute 73) and detects light associated with the object 90 (specifically, the transmitted light that has passed through the object 90 and/or the reflected light that has been reflected by the object 90).
- light may be irradiated onto the object 90 sliding along the chute 73.
- a conveyor instead of the chute 73, light may be irradiated onto the object 90 being transported on the conveyor or onto the object 90 falling from the conveyor.
- the optical detection unit 20 includes a first light source 31, a second light source 32, a first line sensor 40, and a second line sensor 50.
- the first light source 31 and the first line sensor 40 are disposed on one side (also called the front side) of the transport path of the object 90 (in other words, the falling trajectory from the chute 73).
- the second light source 32 and the second line sensor 50 are disposed on the other side (also called the rear side) of the transport path of the object 90.
- the first light source 31 emits a first light 33 toward the multiple objects 90 being transported (i.e., falling from the chute 73).
- the second light source 32 emits a second light 34 toward the multiple objects 90 being transported.
- Each of the first light 33 and the second light 34 has a wavelength corresponding to red, a wavelength corresponding to green, and a wavelength corresponding to blue.
- the first light source 31 and the second light source 32 are so-called color LEDs.
- the light sources 31 and 32 may be any other light-emitting element (e.g., a halogen lamp).
- although the number of light sources 31 and 32 is shown as one each in FIG. 1, at least one of the light sources 31 and 32 may be provided in plural.
- the first line sensor 40 and the second line sensor 50 detect light associated with the object 90 being moved.
- the first line sensor 40 on the front side can detect light 33 (hereinafter also referred to as reflected light 33) emitted from the first light source 31 on the front side and reflected by the object 90, and light 34 (hereinafter also referred to as transmitted light 34) emitted from the second light source 32 on the rear side and transmitted through the object 90.
- the second line sensor 50 on the rear side can detect light 34 (hereinafter also referred to as reflected light 34) emitted from the second light source 32 on the rear side and reflected by the object 90, and light 33 (hereinafter also referred to as transmitted light 33) emitted from the light source 31 on the front side and transmitted through the object 90.
- the type of light detected by the line sensors 40 and 50 is determined by the lighting pattern of the light sources 31 and 32.
- under a first lighting pattern, the first line sensor 40 detects light (hereinafter also referred to as reflected transmitted light) that is a combination of the reflected light 33 and the transmitted light 34, and the second line sensor 50 detects the reflected transmitted light that is a combination of the reflected light 34 and the transmitted light 33.
- under a second lighting pattern, the first line sensor 40 detects the reflected light 33, and the second line sensor 50 detects the transmitted light 33.
- under a third lighting pattern, the first line sensor 40 detects the transmitted light 34, and the second line sensor 50 detects the reflected light 34.
- Which of the first to third lighting patterns is adopted can be arbitrarily determined depending on the type and characteristics of the target object 90 and the type of defective product to be removed. Only one of the first to third lighting patterns may be adopted. Alternatively, two or more of the first to third lighting patterns may appear alternately at a predetermined time interval or according to a predetermined repetition rule.
- the line sensors 40 and 50 are color CCD sensors. More specifically, each of the line sensors 40 and 50 has a plurality of optical elements for detecting light having a wavelength corresponding to red (hereinafter referred to as R elements), a plurality of optical elements for detecting light having a wavelength corresponding to green (hereinafter referred to as G elements), and a plurality of optical elements for detecting light having a wavelength corresponding to blue (hereinafter referred to as B elements).
- R, G, and B refer to R, G, and B in the RGB color space, respectively.
- Each of these optical elements includes a condenser lens, a color filter, and a photoelectric conversion element.
- Each of the color filters has a characteristic of transmitting light of a wavelength corresponding to the color of light to be detected (e.g., red for the R element) and not transmitting light of other wavelengths.
- the line sensors 40 and 50 are not limited to CCD sensors, and may be other types of line sensors such as CMOS sensors.
- the first line sensor 40 is a so-called three-line sensor, and includes an R element group 44 in which a plurality of R elements 41 are arranged in a row, a G element group 45 in which a plurality of G elements 42 are arranged in a row, and a B element group 46 in which a plurality of B elements 43 are arranged in a row.
- the plurality of R elements 41, the plurality of G elements 42, and the plurality of B elements 43 are all linearly arranged in a first direction D1 (which is also the width direction of the chute 73) that intersects with the transport direction of the object 90 (the falling direction of the object 90).
- the R element group 44, the G element group 45, and the B element group 46 are arranged in parallel to be spaced apart from each other in a second direction D2 that is perpendicular to the first direction D1.
- the second direction D2 is also the direction intended as the transport direction of the object 90.
- in some cases, the object 90 has a velocity component in the first direction D1 due to collision between objects 90, and in such cases, the direction of movement of the object 90 intersects with the second direction D2.
- the distance between the R element group 44 and the G element group 45 is L1, and the distance between the G element group 45 and the B element group 46 is L2.
- in this embodiment, L1 = L2, although L1 and L2 may be different values.
- the second line sensor 50 has the same configuration as the first line sensor 40, so a description thereof will be omitted.
- a line sensor can obtain a line-shaped image with one scan, so that an image (an image large enough to contain at least one object 90, typically an image large enough to contain many objects 90) having a predetermined height in the direction corresponding to the second direction D2 can be obtained by combining multiple line-shaped images obtained by multiple scans.
- for example, as shown in FIG. 3, an entire image of one object 90 is obtained for each of the colors R, G, and B by 10 scans (for the sake of simplicity, the example shows a smaller number of scans than the actual number).
- the numbers 1 to 10 shown in FIG. 3 indicate which scan is used to capture the line-shaped area to which the number is attached. For example, image data is obtained from the area marked with "2" by the second scan.
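As a rough sketch of this line-by-line image assembly (a minimal numpy illustration; the function name and data layout are our assumptions, not something prescribed by the publication):

```python
import numpy as np

def assemble_frame(scan_lines: list[np.ndarray]) -> np.ndarray:
    """Stack successive line-shaped scans (each a 1-D array of gradation
    values for one color channel) into a 2-D image. The row index is the
    scan number, so the image Y direction corresponds to the second
    direction D2 and the X direction to the first direction D1 (the
    direction in which the sensor elements are arrayed)."""
    return np.stack(scan_lines, axis=0)

# e.g. 10 scans of a 256-element sensor -> a 10 x 256 image per color channel
frame = assemble_frame([np.zeros(256, dtype=np.uint8) for _ in range(10)])
```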
- the controller 80 determines the characteristics of each of the objects 90 as processing by the characteristic determination unit 82 based on the images thus acquired by the line sensors 40 and 50. Such processing is performed for both the images acquired by the first line sensor 40 and the images acquired by the second line sensor 50.
- the characteristics include color characteristics (in other words, optical characteristics) and shape and/or dimensional characteristics.
- the characteristics also include feature amounts that represent the characteristics in physical quantities and quality that is determined based on the feature amounts.
- “quality” includes, for example, a distinction between good products (i.e., rice grains of relatively high quality) and defective products (i.e., rice grains of relatively low quality and/or foreign objects).
- “quality” may also include the type of defect (e.g., whether it is broken rice, immature grains, discolored grains, damaged grains, dead rice, or foreign objects).
- “quality” may include a distinction between objects that should be removed by the sorting unit 60 and objects that should not be removed.
- “quality” includes quality determined based on color characteristics and quality determined based on shape and/or dimensional characteristics. Defective products determined based on color characteristics may include, for example, immature grains, discolored grains, damaged grains, dead rice, and foreign objects. Defective products determined based on shape and/or dimensional characteristics may include, for example, broken rice, grains infested with insects, and foreign objects.
- the feature determination unit 82 determines whether the object 90 is a good or defective product by comparing the color feature amount (in other words, the gradation value of the image data) with a predetermined threshold value (in other words, based on whether the color feature amount is within a predetermined normal range). Such a determination may be made based on a representative value (average, median, maximum, minimum, etc.) of the gradation values of the multiple pixels that make up the image of the object 90.
- the defective product may include an object 90 that has a partial defect of a predetermined size or larger.
- Such a partial defect may be determined based on the criterion that the number of pixels that have gradation values that are not within the normal range among the multiple pixels that make up the image of the object 90 is a predetermined number or more (in other words, the area of the defective part is a predetermined value or more).
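To illustrate these two color criteria, here is a minimal Python sketch; the function name, the normal range, and the defect-pixel count are hypothetical placeholders, not values taken from the publication:

```python
import numpy as np

def judge_by_color(grain_pixels: np.ndarray,
                   normal_range=(80, 200),    # hypothetical gradation bounds
                   max_defect_pixels=12) -> bool:
    """Return True if one grain is judged good: its representative gradation
    value lies in the normal range, and the number of out-of-range pixels
    (the defective area) stays below a predetermined count."""
    low, high = normal_range
    rep = np.median(grain_pixels)              # representative value
    if not (low <= rep <= high):
        return False
    out_of_range = np.count_nonzero((grain_pixels < low) | (grain_pixels > high))
    return out_of_range < max_defect_pixels
```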
- the feature determination unit 82 determines whether the object 90 is a good or bad product by comparing the geometric and/or dimensional features with a predetermined threshold value (in other words, based on whether the geometric and/or dimensional features are within a predetermined normal range).
- the sorting unit 60 sorts the objects 90 based on the characteristics determined by the characteristic determination unit 82. This sorting is performed by a trajectory change operation to change the trajectory of a particular object 90.
- the sorting unit 60 has a number of nozzles 61 and a corresponding number of valves 62 (in this embodiment one valve per nozzle, although the number of valves 62 may differ from the number of nozzles 61).
- the multiple nozzles 61 are arranged in the width direction of the chute 73.
- the nozzles 61 are connected to a compressor (not shown) via valves 62, respectively.
- the valves 62 are selectively opened in response to a control signal from the controller 80, and the nozzles 61 selectively spray air 63 toward the objects 90 determined to be defective (more precisely, the objects 90 determined to be defective based on color characteristics and the objects 90 determined to be defective based on shape and/or dimensional characteristics in the images acquired by at least one of the line sensors 40 and 50).
- the objects 90 determined to be defective are blown away by the air 63, deviate from the falling trajectory from the chute 73, and are guided to the defective product discharge gutter 75 (shown as object 91 in FIG. 1).
- the air 63 is not sprayed toward the objects 90 determined to be good products. Therefore, the objects 90 determined to be good products are guided to the good product discharge gutter 74 without changing the falling trajectory (shown as object 92 in FIG. 1).
- air 63 may be sprayed toward the object 90 sliding along the chute 73 to change the transport path of the object 90.
- a belt conveyor may be used as the transport means. In this case, air may be sprayed toward the object falling from one end of the belt conveyor. Alternatively, air may be sprayed toward the object being transported on the belt conveyor.
- instead of spraying air 63 against objects 90 determined to be defective, air 63 may be sprayed against objects 90 determined to be non-defective (a so-called reverse shot).
- the trajectory change operation is not limited to spraying air 63, and any other known method may be adopted.
- in this embodiment, the transport speed in a predetermined direction of the object 90 during transport is detected, and the geometric and/or dimensional characteristics are determined based on the transport speed.
- the predetermined direction includes both the first direction D1 and the second direction D2.
- the predetermined direction may be only one of the first direction D1 and the second direction D2.
- a method for detecting the transport speed of the object 90 in a predetermined direction will be described with reference to Figs. 4 and 5.
- a detailed description will be given of a method that uses the first line sensor 40 itself to detect the transport speed of the object 90 at the moment when the object 90 is imaged by the first line sensor 40.
- Such detection of the transport speed is performed as processing by the transport speed calculation unit 81 of the controller 80.
- the transport speed calculation unit 81 first calculates the amount of color shift for the color image acquired by the first line sensor 40.
- the R element group 44, the G element group 45, and the B element group 46 perform scanning simultaneously, but since the R element group 44, the G element group 45, and the B element group 46 are spaced apart from one another in the second direction D2 (see Figure 2), strictly speaking, the imaged locations of the object 90 are shifted between each color by the distance between them.
- a color shift occurs in the direction corresponding to the second direction D2 between the red image acquired by the R element group 44, the green image acquired by the G element group 45, and the blue image acquired by the B element group 46.
- if the direction of movement of the object 90 includes a component of the first direction D1 due to collision between the objects 90 (in other words, if the direction of movement of the object 90 is a direction that intersects with the second direction D2, excluding the perpendicular direction), color shifts will also occur in the direction corresponding to the first direction D1 between the red image, green image, and blue image.
- the amount of such color shifts is calculated in units smaller than one pixel, which is the unit that constitutes an image.
- the transport speed calculation unit 81 calculates the color shift amounts S1rg and S2rg between the red image and the green image, the color shift amounts S1gb and S2gb between the green image and the blue image, and the color shift amounts S1rb and S2rb between the red image and the blue image.
- FIG. 4 is an explanatory diagram showing an example of a method for calculating the color shift amounts. In the following, with reference to FIG. 4, the color shift amounts S1rg and S2rg between the red image 92R and the green image 92G are described as being calculated for each grain of the object 90.
- the transport speed calculation unit 81 calculates the red density center of gravity coordinate point 93R of the red image 92R and the green density center of gravity coordinate point 93G of the green image 92G based on the color gradation value of each coordinate point (the gradation values of the coordinate points corresponding to a single pixel are all equal to one another).
- the coordinate value of the density center of gravity coordinate point can be calculated by dividing the sum of the values obtained by multiplying the coordinate value and the gradation value for each coordinate point for each of the X and Y coordinates by the sum of the gradation values of each coordinate point.
- the transport speed calculation unit 81 calculates the color shift amount from the determined density center coordinate point. For example, as shown in FIG. 4, the transport speed calculation unit 81 acquires the distance between the red density center coordinate point 93R and the green density center coordinate point 93G in the Y direction (i.e., the direction corresponding to the second direction D2) as the color shift amount S2rg (unit: pixel) in the Y direction between the red image 92R and the green image 92G.
- the transport speed calculation unit 81 also acquires the distance between the red density center coordinate point 93R and the green density center coordinate point 93G in the X direction (i.e., the direction corresponding to the first direction D1) as the color shift amount S1rg in the X direction between the red image 92R and the green image 92G. Although not shown in the figure, the transfer speed calculation unit 81 similarly calculates the color shift amounts S2gb, S1gb in the Y and X directions between the green image 92G and the blue image 92B, and the color shift amounts S2rb, S1rb in the Y and X directions between the red image 92R and the blue image 92B.
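A minimal numpy sketch of the density-centroid and color-shift computation described above (the function names are ours; the publication does not prescribe an implementation):

```python
import numpy as np

def density_centroid(img: np.ndarray) -> tuple[float, float]:
    """Density center of gravity: for each of X and Y, the sum of
    (coordinate * gradation) over all points divided by the sum of the
    gradations. The result has sub-pixel precision."""
    ys, xs = np.indices(img.shape)
    total = float(img.sum())
    return float((xs * img).sum()) / total, float((ys * img).sum()) / total

def color_shift(img_a: np.ndarray, img_b: np.ndarray) -> tuple[float, float]:
    """Color shift between two single-channel images of one grain: the X
    distance (S1, first direction D1) and the Y distance (S2, second
    direction D2) between their density centroids, in pixels."""
    xa, ya = density_centroid(img_a)
    xb, yb = density_centroid(img_b)
    return xa - xb, ya - yb
```

For the pair 92R/92G this yields (S1rg, S2rg); the green/blue and red/blue pairs are handled the same way.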
- the calculation of the color shift amount is not limited to the above-mentioned method using the center of density gravity, and can be performed by any known method.
- the color shift amount may be calculated for each particle group that overlaps on the color image.
- image areas other than the image area representing the particle or particle group of the object 90 (hereinafter also referred to as blank areas) are excluded from the calculation. The blank areas can be easily removed by binarizing the color image. In this way, if the color shift amount is calculated for each particle or particle group, the color shift amount can be calculated with high accuracy.
- the transport speed calculation unit 81 may divide the color image into multiple areas (these areas are large enough to include multiple particles) and calculate the color shift amount for each of the multiple areas.
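As a sketch of isolating particles (or overlapping particle groups) before the per-particle shift calculation, binarization can be combined with connected-component labeling; the threshold value is a placeholder and scipy's labeling stands in for whatever segmentation the real machine uses:

```python
import numpy as np
from scipy import ndimage

def split_particles(channel: np.ndarray, threshold: int = 40):
    """Binarize one color channel to drop the blank areas, then label the
    connected pixel groups so each particle (or overlapping particle group)
    can be processed separately."""
    mask = channel > threshold
    labels, count = ndimage.label(mask)          # connected components
    for i in range(1, count + 1):
        yield np.where(labels == i, channel, 0)  # one particle, blanks zeroed
```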
- the transport speed calculation unit 81 calculates the transport speed of the object 90 in the second direction D2 based on the color shift amounts S2rg, S2gb, and S2rb in the Y direction (i.e., the direction corresponding to the second direction D2) for each grain of the object 90. If the time required for one scan of the first line sensor 40 is the scan time T, the transport speed V2rg of the object 90 in the second direction D2 calculated based on the color shift amount S2rg between the red image 92R and the green image 92G is obtained, for example, by the following formula (1). In formula (1), L1>0. In addition, in formula (1), the unit of the color shift amount S2rg is a unit representing distance.
- the unit of the separation distance L1 and the color shift amount S2rg is "mm", and the unit of the scan time T is "ms", in which case the unit of the obtained transport speed V2rg is "m/s".
- the transport speed calculation unit 81 obtains the color shift amount S2rg in units of "pixels"; the color shift amount S2rg in units of "mm" is obtained by multiplying the color shift amount S2rg in "pixels" by the size (mm) of a pixel of the first line sensor 40 in the second direction D2 (i.e., the size per pixel).
- the transport speed V2rg is not limited to formula (1), and may be calculated by another formula that includes the color shift amount S2rg as a variable based on experiments, etc.
- V2rg = (L1 + S2rg) / T ... (1)
- the transport speed V2gb of the object 90 in the second direction D2 calculated based on the color shift amount S2gb is obtained by the following formula (2)
- the transport speed V2rb of the object 90 in the second direction D2 calculated based on the color shift amount S2rb is obtained by the following formula (3).
- V2gb = (L2 + S2gb) / T ... (2)
- V2rb = (L3 + S2rb) / T ... (3)
- here, L3 is the distance between the R element group 44 and the B element group 46.
- the transport speed calculation unit 81 calculates transport speeds V1rg, V1gb, and V1rb of the object 90 in the first direction D1 based on the color shift amounts S1rg, S1gb, and S1rb in the X direction (i.e., the direction corresponding to the first direction D1).
- the transport speeds V1rg, V1gb, and V1rb are obtained, for example, by the following equations (4) to (6).
- V1rg = S1rg / T ... (4)
- V1gb = S1gb / T ... (5)
- V1rb = S1rb / T ... (6)
- the transport speed calculation unit 81 detects a representative value (e.g., the average or median) of the transport speeds V2rg, V2gb, and V2rb calculated as described above as the transport speed V2 of the object 90 in the second direction D2. Similarly, it detects a representative value of the transport speeds V1rg, V1gb, and V1rb as the transport speed V1 of the object 90 in the first direction D1.
- alternatively, the transport speed calculation unit 81 may detect the transport speed V2 based on only one or two of V2rg, V2gb, and V2rb, or may detect the transport speed V1 based on only one or two of V1rg, V1gb, and V1rb.
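Putting formulas (1) through (6) together in a short sketch (all numeric values are hypothetical examples; with mm and ms as inputs the speeds come out in m/s, since mm/ms = m/s):

```python
PIXEL_MM = 0.07   # hypothetical pixel size of the line sensor (mm/pixel)
SCAN_MS = 0.1     # hypothetical scan time T (ms)
L1_MM, L2_MM = 0.5, 0.5        # hypothetical group separations L1, L2 (mm)
L3_MM = L1_MM + L2_MM          # R-to-B separation L3 (mm)

def v2_from_shift(s2_px: float, gap_mm: float) -> float:
    """Formulas (1)-(3): V2 = (L + S2) / T, with the pixel shift S2 first
    converted to mm via the pixel size in the second direction D2."""
    return (gap_mm + s2_px * PIXEL_MM) / SCAN_MS

def v1_from_shift(s1_px: float) -> float:
    """Formulas (4)-(6): V1 = S1 / T."""
    return (s1_px * PIXEL_MM) / SCAN_MS

v2rg = v2_from_shift(0.86, L1_MM)   # about 5.6 m/s with these numbers
# V2gb and V2rb follow the same pattern with L2 and L3; the detected V2 (and
# likewise V1) is a representative value, e.g. the median, of the per-pair
# estimates.
```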
- the feature determination unit 82 corrects the image of the object 90 based on the transport speed V1 in the first direction D1 and the transport speed V2 in the second direction D2, and determines the geometric and/or dimensional characteristics of the object 90 based on the corrected image.
- alternatively, only one of the correction based on the transport speed V1 and the correction based on the transport speed V2 may be performed. The correction based on the transport speed V1 and/or the transport speed V2 is performed for each grain of the object 90.
- the resolution of the line sensor in the second direction D2 (in the example of FIG. 2, the direction perpendicular to the direction in which the elements 41 to 43 are arranged) is determined by the distance the object 90 moves in the second direction D2 during one scan time.
- the resolution in the second direction D2 is expressed as the product of one scan time and the moving speed V2 of the object 90 in the second direction D2. Therefore, the size (in other words, the number of pixels) of the object 90 in the image in the direction corresponding to the second direction D2 (i.e., the Y direction in FIG. 4) varies depending on the moving speed V2.
- the moving speed V2 of the object 90 in the second direction D2 and the size of the object 90 in the image in the direction corresponding to the second direction D2 are inversely proportional to each other.
- the resolution of the line sensor in the first direction D1 is constant and does not depend on the moving speed of the object 90.
- therefore, when the object 90 is transported at a transport speed V2 different from an assumed speed, the size of the object 90 in the direction corresponding to the second direction D2 on the image will differ from the actual size, and the measurement accuracy of the geometric and/or dimensional features will decrease.
- the threshold value for determining the geometric and/or dimensional quality is set in advance based on the design value for the transport speed V2 (in other words, based on the actual size of the object 90). Therefore, when the object 90 is transported at a transport speed V2 different from a previously assumed transport speed, the measurement accuracy of the geometric and/or dimensional quality will also decrease.
- the image acquired by the first line sensor 40 is corrected based on the transport speed V2 in the second direction D2.
- the feature determination unit 82 modifies the size, in the direction corresponding to the second direction D2, of the object 90 on the image acquired by the first line sensor 40 so as to reduce or eliminate the effect on the image (more specifically, on the size of the image of the object 90 in the direction corresponding to the second direction D2) of the difference between the transport speed V2 in the second direction D2 calculated by the transport speed calculation unit 81 and a design value for the transport speed V2 in the second direction D2 (hereinafter also referred to as the reference value VR).
- if the transport speed V2 in the second direction D2 calculated by the transport speed calculation unit 81 is greater than the reference value VR, the size of the object 90 on the image in the direction corresponding to the second direction D2 will be smaller than its actual size.
- the feature determination unit 82 corrects the image so that the size of the object 90 on the image in the direction corresponding to the second direction D2 is larger.
- conversely, if the transport speed V2 in the second direction D2 calculated by the transport speed calculation unit 81 is smaller than the reference value VR, the size of the object 90 on the image in the direction corresponding to the second direction D2 will be larger than its actual size.
- the feature determination unit 82 corrects the image so that the size of the object 90 on the image in the direction corresponding to the second direction D2 is smaller.
- the actual transport speed V2 of the object 90 in the second direction D2 and the size of the object 90 in the image in the direction corresponding to the second direction D2 are inversely proportional to each other, so if a correction is made by multiplying the dimension (i.e., the number of pixels) of the image of the object 90 acquired by the first line sensor 40 in the direction corresponding to the second direction D2 by the ratio V2/VR, the effect of fluctuations in the transport speed V2 on the image can be minimized (i.e., the size of the object 90 in the image can be corrected to its actual size).
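A dependency-light sketch of this correction (nearest-neighbor row resampling; a production implementation would more likely interpolate):

```python
import numpy as np

def correct_second_direction(img: np.ndarray, v2: float, vr: float) -> np.ndarray:
    """Rescale the image along Y (the direction corresponding to the second
    direction D2) by the ratio V2/VR, so the object's on-image size in that
    direction approaches its actual size."""
    h = img.shape[0]
    new_h = max(1, round(h * v2 / vr))
    # Map each output row back to the nearest source row.
    rows = np.minimum((np.arange(new_h) * h / new_h).astype(int), h - 1)
    return img[rows]
```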
- Figure 5 shows an example of such a correction.
- in FIG. 5, the object image 94 is the image of the object 90 before correction when the transport speed V2 is greater than the reference value VR; the correction enlarges it in the direction corresponding to the second direction D2 toward the object image 93, which shows the object 90 at its actual size.
- the object image 95 is the image of the object 90 before correction when the transport speed V2 is smaller than the reference value VR; the correction reduces it in the same direction toward the object image 93.
- the feature determination unit 82 determines the geometric and/or dimensional features (i.e., features and quality based thereon) of the object 90 based on the image corrected in this manner based on the transport speed V2. With this type of correction, even if the resolution of the line sensor 40 in the second direction D2 changes due to fluctuations in the transport speed V2 and an image of the object 90 is obtained in which the size in the direction corresponding to the second direction D2 differs from the actual size, the image can be corrected so that the size in the second direction D2 approaches the actual size or becomes the actual size. Therefore, the feature determination unit 82 can accurately determine the geometric and/or dimensional features of the object 90 based on the corrected image.
- image correction based on the transport speed V1 in the first direction D1 will be described.
- if the object 90 has a speed component in the first direction D1, simply combining the images of each scan will cause the linear areas numbered 1 to 10 in FIG. 3 to be displaced in stages in the direction corresponding to the first direction D1.
- as a result, a shape distorted in the direction corresponding to the first direction D1 (the X direction) is detected compared to the actual shape of the object 90, which results in a decrease in the measurement accuracy of the geometric and/or dimensional characteristics of the object 90. Therefore, in order to suppress such a decrease in measurement accuracy, in this embodiment, image correction is performed based on the transport speed V1 in the first direction D1.
- the feature determination unit 82 corrects the coordinate values in the first direction D1 of the multiple pixels constituting the image of the object 90 by the amount of deviation caused by the transport speed V1 in the first direction D1. Specifically, the coordinate values of the object 90 in the first direction D1 are shifted by a distance L4 that the object 90 moves in the first direction D1 during one scan time by the first line sensor 40 between the image area of the object 90 obtained by the Nth (N is a natural number) scan and the image area of the object 90 obtained by the N+1th scan.
- the feature determination unit 82 corrects the coordinate values of the image of the object 90 obtained by the second and subsequent scans so that they are returned to their original positions by the amount of deviation from the reference, using the first scan performed on one object 90 as a reference.
- the correction amount (actual distance) of the coordinate values of the image area of the object 90 obtained by the Mth (M is an integer equal to or greater than 2) scan is expressed as (M-1) × L4. Dividing (M-1) × L4 by the size of the pixels of the first line sensor 40 in the first direction D1 (i.e., the size per pixel) gives the correction amount in units of pixels.
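A sketch of this row-shift correction (np.roll is used for brevity; a real implementation would pad with blank pixels rather than wrap around, and the sign of the shift follows the direction of V1):

```python
import numpy as np

def correct_first_direction(img: np.ndarray, l4_pixels: float) -> np.ndarray:
    """Undo the stepwise lateral displacement: the row from the Mth scan is
    shifted back by (M - 1) * L4, with L4 expressed in pixels of the first
    direction D1. The first scan (row 0) is the reference."""
    out = np.empty_like(img)
    for m, row in enumerate(img):            # m = M - 1
        out[m] = np.roll(row, round(m * l4_pixels))
    return out
```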
- each grid cell in FIGS. 6 and 7 represents one pixel that constitutes the image.
- the numerical values in the figure represent coordinate values in an XY coordinate system defined by the X direction corresponding to the first direction D1 and the Y direction corresponding to the second direction D2.
- the coordinate value in the Y direction is assigned for each scan of the first line sensor 40.
- in other words, the Y coordinate value indicates by which scan the linear image area is obtained.
- the hatched grid in FIG. 6 represents pixels that represent the object 90 (shown as object 96) on the image before correction
- the hatched grid in FIG. 7 represents pixels that represent the object 90 (shown as object 97) on the image after correction.
- FIG. 6 and 7 show a case where it is assumed that the direction of the transport speed V1 is the negative direction of the X coordinate, and that the distance L4 (the distance that the object 90 moves in the first direction D1 during one scan time) is equal to the resolution of the line sensor 40 in the first direction D1.
- the image of the object 96 shown in FIG. 6 (an image of the object 90 captured in a distorted shape due to the transport speed V1 in the first direction D1) is corrected so that, starting from the line with a Y coordinate value of 2, the X coordinate value increases by the distance L4 (here, the distance of one pixel) for each increase of 1 in the Y coordinate value, thereby yielding the image of the object 97 shown in FIG. 7 (an image of the object 90 in its actual shape).
- the feature determination unit 82 determines the geometric and/or dimensional features of the object 90 based on the image thus corrected based on the transport speed V1. With this correction, even if the object 90 is transported while shifting laterally (i.e., while shifting in the first direction D1) with respect to the intended transport direction (i.e., the second direction D2) and an image of the object 90 distorted in a direction corresponding to the first direction D1 is obtained, the image can be corrected so that the distortion is reduced or eliminated. Therefore, the feature determination unit 82 can accurately determine the geometric and/or dimensional features of the object 90 based on the corrected image.
- the transport speed calculation unit 81 may calculate the transport speeds V1 and V2 based on the amount of color shift in the image acquired by the second line sensor 50.
- the feature determination unit 82 may correct the image acquired by the second line sensor 50 based on the transport speeds V1 and V2 acquired using the second line sensor 50, and determine the shape and/or dimensional features of the object 90 based on the corrected image.
- in the above embodiment, the image of each object 90 is corrected based on the transport speeds V1 and V2 calculated for that object 90.
- in other words, the transport speeds V1 and V2 used for correction are values specific to each object 90.
- therefore, a more accurate correction can be performed using more accurate transport speeds V1 and V2 for each object 90, and the shape and/or dimensional characteristics can be determined more accurately.
- the transport speeds V1 and V2 detected at a predetermined timing for at least one object 90 may be commonly used to correct the images of multiple objects 90.
- the transport speeds V1 and V2 used for correction may be updated, for example, at a predetermined frequency (for example, every 5 minutes).
- the transport speeds V1 and V2 detected at the start of operation of the sorting machine 10 may be used continuously for correction in subsequent operations.
- the calculation load of the transport speed calculation unit 81 can be reduced.
- alternatively, a representative value (e.g., an average value) of the transport speeds V1 and V2 detected for multiple objects 90 may be commonly used to correct the images of those multiple objects 90.
- the feature determination unit 82 may directly correct the geometric and/or dimensional feature amount based on the transport speed V2 in the second direction D2.
- the feature determination unit 82 in order to obtain a feature amount (hereinafter also referred to as a first feature amount) corresponding to the geometric and/or dimensional feature amount determined based on the corrected image, the feature determination unit 82 first determines the geometric and/or dimensional feature amount (second feature amount) from the image (image to which the above-mentioned correction is not applied) obtained by the first line sensor 40 (or the second line sensor 50).
- the feature determination unit 82 corrects the second feature amount by modifying the feature amount component in the direction corresponding to the second direction D2 of the second feature amount based on the transport speed V2 in the second direction D2, thereby obtaining the first feature amount.
- for example, if the feature amount to be acquired as the first feature amount is the area or the height (length in the direction corresponding to the second direction D2) of the object 90, the feature determination unit 82 determines the value obtained by multiplying the second feature amount by the ratio V2/VR as the first feature amount. If the feature amount to be acquired as the first feature amount is the width of the object 90 (length in the direction corresponding to the first direction D1), the feature determination unit 82 determines the second feature amount as the first feature amount without making any correction.
- if the feature amount to be acquired is the perimeter length, the length component of the perimeter in the direction corresponding to the first direction D1 (also called Lx) is not corrected; only the length component in the direction corresponding to the second direction D2 (also called Ly) is corrected by multiplying it by the ratio V2/VR, and the value Lx + Ly × V2/VR is determined as the first feature amount.
- the components Lx and Ly can be identified, for example, using the method described in JP 2012-127706 A. This method is well known, so to explain it briefly: first, the image is binarized and a pixel range representing the object 90 is identified. Next, a window of a predetermined size (for example, 2×2 pixels or 3×3 pixels) is applied to the pixel range representing the object 90. Depending on where in the window the pixels representing the object 90 are located, it is possible to identify whether a pixel in the window represents the periphery of the object 90 or the inside of the object 90, and also to identify which of the components Lx and Ly a periphery pixel contributes to. This identification is repeated while shifting the window position by one pixel until the attributes of all the pixels representing the object 90 are identified.
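For illustration, the same Lx/Ly decomposition can be obtained by counting boundary edges of the binarized object mask; this is a stand-in for the windowed procedure of JP 2012-127706 A, not a reproduction of it:

```python
import numpy as np

def perimeter_components(mask: np.ndarray) -> tuple[int, int]:
    """Split the pixelated perimeter of a binary object mask into Lx
    (horizontal boundary edges, running in the X / first direction) and Ly
    (vertical boundary edges, running in the Y / second direction)."""
    m = np.pad(mask.astype(np.uint8), 1)
    lx = int(np.count_nonzero(np.diff(m, axis=0)))  # edges between rows
    ly = int(np.count_nonzero(np.diff(m, axis=1)))  # edges between columns
    return lx, ly

def corrected_perimeter(mask: np.ndarray, v2: float, vr: float) -> float:
    lx, ly = perimeter_components(mask)
    return lx + ly * v2 / vr   # only the D2 component is speed-corrected
```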
- the feature determination unit 82 determines the shape and/or size quality based on the first feature amount by the above-mentioned method. As described above, instead of correcting the image based on the transport speed V2, the second feature amount is corrected based on the transport speed V2 to obtain the first feature amount, and the same effect as when correcting the image can be obtained.
- the feature determination unit 82 may determine (correct) a threshold for determining the geometric and/or dimensional quality based on the transport speed V2 in the second direction D2.
- the size of the object 90 on the image in the direction corresponding to the second direction D2 is VR/V2 times the actual size. Therefore, the feature determination unit 82 corrects the threshold by, for example, multiplying the design value of the threshold related to the direction corresponding to the second direction D2 by the ratio VR/V2. In other words, the feature determination unit 82 corrects the threshold so that the threshold changes at the same rate in accordance with the fluctuation in the image size in the second direction D2 caused by the fluctuation in the transport speed V2.
- the feature determination unit 82 may correct a threshold value related to the area or height of the object 90 (the length in the direction corresponding to the second direction D2) by multiplying it by the ratio VR/V2. In this case, the feature determination unit 82 determines whether the object 90 is a good or bad product by comparing the geometric and/or dimensional feature amount calculated from the uncorrected image with the corrected threshold value. With this configuration, the geometric and/or dimensional quality of the object 90 can be accurately determined.
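A one-line sketch of this threshold correction (the names are ours):

```python
def corrected_threshold(design_threshold: float, v2: float, vr: float) -> float:
    """The on-image size in the direction corresponding to D2 is VR/V2 times
    the actual size, so a threshold on area or height is scaled by the same
    ratio before being compared with the feature amount obtained from the
    uncorrected image."""
    return design_threshold * vr / v2
```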
- the transport speeds V1 and V2 may be detected by any method other than the above-mentioned method.
- the sorting machine 10 may be equipped with an area sensor for detecting the transport speeds V1 and V2.
- the transport speed calculation unit 81 calculates the transport speeds V1 and V2 for the same object 90 on two images acquired by the area sensor at two different times, based on the travel distance and travel time between the two times.
- the travel distance may be calculated based on the difference in the position of the concentration center of gravity of the object 90 in each of the two images, as in the above example.
- the travel time can be calculated based on the known scan time.
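Assuming square pixels of known pitch and reusing density_centroid from the earlier sketch, the area-sensor variant could look like this (all names and the time base are our assumptions):

```python
def speed_from_two_frames(img_t0, img_t1, dt_ms: float, pixel_mm: float):
    """Estimate (V1, V2) for one object from two area-sensor images taken
    dt_ms apart: the travel distance is the displacement of the density
    centroid, converted from pixels to mm. Result is in m/s (mm/ms)."""
    x0, y0 = density_centroid(img_t0)
    x1, y1 = density_centroid(img_t1)
    v1 = (x1 - x0) * pixel_mm / dt_ms   # first direction D1
    v2 = (y1 - y0) * pixel_mm / dt_ms   # second direction D2
    return v1, v2
```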
- the sorting machine 10 may use two line sensors arranged at different detection positions (image capture positions of the object 90 on the transport path) instead of such an area sensor.
- the line sensors 40 and 50 may be any color sensor in which at least two of the R, G, and B element groups are arranged so as to be spaced apart from each other in the second direction D2.
- the line sensors 40 and 50 may be color sensors in a Bayer array.
- the transport speeds V1 and V2 can be determined based on the amount of color shift between the red and blue images obtained from the R and B element groups.
- the feature determination unit 82 may determine the shape and/or dimensional features of a portion of the object 90 based on the transport speed V1 and/or transport speed V2. Such a portion may be a portion of the object 90 that has a predetermined color feature. For example, the feature determination unit 82 may determine the shape and/or dimensional feature amount of an area having a partial defect (in other words, an area on the image consisting of a group of pixels whose color gradation values are not within the normal range) based on the transport speed V1 and/or transport speed V2 in order to determine the color quality of the object 90 based on whether or not it has a partial defect of a predetermined size or more.
- the sorting machine 10 may be equipped with an electromagnetic wave irradiation source that irradiates the object 90 with electromagnetic waves having any wavelength.
- electromagnetic waves may be, for example, at least one of X-rays and near-infrared rays.
- the sorting machine 10 may be equipped with a line sensor having multiple electromagnetic wave detection elements corresponding to the wavelengths of the electromagnetic waves irradiated by the electromagnetic wave irradiation source. The various corrections described above can also be applied to images acquired using such an electromagnetic wave irradiation source and line sensor.
- the present invention is not limited to sorting machines and can be realized in various forms.
- the present invention may be realized as a measuring device for measuring geometric and/or dimensional characteristics of an entire and/or partial object.
- a measuring device may have a form in which the sorting unit 60 is removed from the sorting machine 10 shown in FIG. 1.
- the controller 80 may output the determined characteristics of the object 90 (or statistics thereof) to any device (e.g., a display, a communication interface, a storage medium, a printing device, etc.).
Abstract
This measuring device for measuring a geometric and/or dimensional characteristic of the whole and/or a part of a target object comprises: a transport unit configured to transport the target object; an electromagnetic wave irradiation source configured to irradiate electromagnetic waves onto the target object being transported by the action of the transport unit; a line sensor that includes a plurality of electromagnetic wave detection elements arranged linearly in a first direction intersecting the transport direction of the target object, and that is configured to detect at least one of the electromagnetic waves emitted from the electromagnetic wave irradiation source and reflected by the target object and the electromagnetic waves transmitted through the target object; a transport speed detection unit configured to detect a transport speed, in a predetermined direction, of the target object being transported; and a characteristic determination unit configured to determine the characteristic of the target object based on an image acquired by the line sensor and the transport speed in the predetermined direction.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022188537A JP2024076783A (ja) | 2022-11-25 | 2022-11-25 | 測定装置および選別機 |
JP2022-188537 | 2022-11-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024111183A1 (fr) | 2024-05-30 |
Family
ID=91196052
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2023/029975 WO2024111183A1 (fr) | 2022-11-25 | 2023-08-21 | Dispositif de mesure et machine de tri |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2024076783A (fr) |
WO (1) | WO2024111183A1 (fr) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0814866A (ja) * | 1994-06-29 | 1996-01-19 | Nippon Light Metal Co Ltd | 画像処理方法および画像処理装置 |
JP2008018419A (ja) * | 2006-06-15 | 2008-01-31 | Satake Corp | 光学式胴割選別機 |
CN106599838A (zh) * | 2016-12-13 | 2017-04-26 | 南京林业大学 | 一种籽棉异纤分选系统延迟时间动态调整装置及方法 |
JP2021018175A (ja) * | 2019-07-22 | 2021-02-15 | 京セラドキュメントソリューションズ株式会社 | 画像形成装置 |
- 2022-11-25: Japanese application JP2022188537A filed; published as JP2024076783A (status: pending)
- 2023-08-21: PCT application PCT/JP2023/029975 filed; published as WO2024111183A1
Also Published As
Publication number | Publication date |
---|---|
JP2024076783A (ja) | 2024-06-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8035052B2 (en) | Method and apparatus for visiometric in-line product inspection | |
CN101088633B (zh) | 光学裂纹粒挑选器 | |
KR20010067172A (ko) | 2 개 이상의 다른 임계 레벨로 입상물들을 선별하는 방법및 장치 | |
JP2009115613A (ja) | 異物検査装置 | |
JP6084214B2 (ja) | 交互側方照明型検査装置 | |
WO2024111183A1 (fr) | Dispositif de mesure et machine de tri | |
WO2021106964A1 (fr) | Machine de tri optique | |
US20170160080A1 (en) | Method of measuring a 3d profile of an article | |
JP4076414B2 (ja) | 不良物検出装置及びそれを用いた分離装置 | |
US5586663A (en) | Processing for the optical sorting of bulk material | |
US20230009210A1 (en) | Optical sorter | |
WO2024154665A1 (fr) | Dispositif de mesure et dispositif de sélection | |
JP7521570B2 (ja) | 光学式選別機 | |
JP7111275B2 (ja) | 光学式選別機 | |
JP4338284B2 (ja) | 粉粒体検査装置 | |
JPH09318547A (ja) | 農産物の外観検査方法及び装置 | |
JP2004097969A (ja) | 農産物下面撮像装置及び農産物ランク選別装置 | |
JP3694590B2 (ja) | 農産物の画像読取装置及びこれを用いた選別装置 | |
JP2023136102A (ja) | 測定装置および選別装置 | |
JPH09108640A (ja) | 穀粒選別機 | |
JP2862821B2 (ja) | 穀粒選別機 | |
CN118871774A (zh) | 测定装置及分选装置 | |
US11666947B2 (en) | Selector machine | |
JPS6342411A (ja) | 物体の三方向計測検査方法と装置 | |
WO2016018157A1 (fr) | Système de tri d'articles avec détection de voies synchronisée |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 23894199; Country of ref document: EP; Kind code of ref document: A1 |