WO2023064786A1 - Pallet inspection system and associated methods - Google Patents
- Publication number
- WO2023064786A1 (PCT/US2022/077936)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pallet
- images
- algorithm
- support blocks
- nails
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G43/00—Control devices, e.g. for safety, warning or fault-correcting
- B65G43/08—Control devices operated by article or material being fed, conveyed or discharged
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/776—Validation; Performance evaluation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G2201/00—Indexing codes relating to handling devices, e.g. conveyors, characterised by the type of product or load being conveyed or handled
- B65G2201/02—Articles
- B65G2201/0267—Pallets
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G2203/00—Indexing code relating to control or detection of the articles or the load carriers during conveying
- B65G2203/02—Control or detection
- B65G2203/0208—Control or detection relating to the transported articles
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G2203/00—Indexing code relating to control or detection of the articles or the load carriers during conveying
- B65G2203/04—Detection means
- B65G2203/041—Camera
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G2203/00—Indexing code relating to control or detection of the articles or the load carriers during conveying
- B65G2203/04—Detection means
- B65G2203/042—Sensors
- B65G2203/044—Optical
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30136—Metal
Definitions
- The present invention relates to pallet inspection, and more particularly, to determining nail defects on a pallet.
- Wooden pallets are used to transport a variety of bulk goods and equipment as required in manufacturing and warehousing operations. In high volume industries, pallet pools provide a lower total industry cost than one-way pallets.
- The pallets are returned to pallet inspection and repair facilities. A damaged pallet will be discarded if the repairs are too extensive. Otherwise, if the damage is minor, then the pallet is repaired and painted before being returned to service.
- A pallet inspection system includes a conveyor to move a pallet that is to be inspected.
- The pallet includes a top deck and a bottom deck separated by spaced apart support blocks positioned therebetween, with nails being used to secure the top and bottom decks to the support blocks.
- A plurality of cameras are positioned to generate images of the pallet as the pallet is moved on the conveyor.
- A processor is coupled to the plurality of cameras and receives the images for processing. The processing includes executing a first algorithm on the images to tag the images having support blocks visible therein, and executing a second algorithm on the tagged images to detect nails having exposed tips.
- The pallet inspection system may further include a first sensor positioned before the plurality of cameras, and a second sensor positioned after the plurality of cameras.
- The processor may be configured to receive a first set of images of the pallet in response to the first sensor being activated, and to receive a second set of images of the pallet in response to the second sensor being activated.
- The first and second sensors may comprise photoelectric sensors.
- The first and second algorithms may be executed by the processor so that nails having exposed tips are detected in real time.
- The plurality of support blocks are spaced apart to form a pair of outer rows and a center row therebetween, with the outer rows including corner support blocks.
- Each camera may be in focus on the support blocks in one of the rows and out-of-focus on the support blocks in the other rows.
- The plurality of cameras may be divided into first and second camera sets, with the first camera set being directed toward an entrance of the pallet inspection system, and the second camera set being directed toward an exit of the pallet inspection system.
- The first camera set may provide a front perspective of the plurality of support blocks.
- The second camera set may provide a rear perspective of the plurality of support blocks, with the first and second camera sets collectively providing images of all sides of each support block.
- The first camera set may include cameras adjacent each side of the conveyor, and the second camera set may include cameras adjacent each side of the conveyor.
- Each camera may be configured as a color camera.
- The first algorithm may include an image classification algorithm.
- The second algorithm may include an object detection algorithm.
- The image classification algorithm may be configured to classify each image as one of the following: a block image corresponding to a support block being in focus, a non-block image corresponding to a support block being out-of-focus, or a background image corresponding to neither a support block being in focus nor a support block being out-of-focus.
- The block images may be tagged by the image classification algorithm.
- The object detection algorithm may be configured to place a bounding box around the support block in each block image and, in response to an exposed nail tip being detected, place a bounding box around the exposed nail tip.
- The object detection algorithm may be configured to detect other nail defects in addition to nails having exposed tips, with the other nail defects being ignored by the processor.
- Another aspect is directed to a method for operating a pallet inspection station as described above.
- The method includes operating a conveyor to move a pallet that is to be inspected, the pallet comprising a top deck and a bottom deck separated by a plurality of spaced apart support blocks positioned therebetween, and with nails being used to secure the top and bottom decks to the plurality of support blocks.
- The method further includes operating a plurality of cameras positioned to generate images of the pallet as the pallet is moved on the conveyor, and receiving the images for processing.
- The processing may include executing a first algorithm on the images to tag the images having support blocks visible therein, and executing a second algorithm on the tagged images to detect nails having exposed tips.
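The two-stage processing above can be sketched in a few lines. This is a hypothetical illustration only: the function names and the dictionary-based "image" representation are invented stand-ins for the trained classification and detection models.

```python
# Hypothetical sketch of the two-stage processing: a classifier tags images
# containing a visible, in-focus support block, and only those tagged images
# are passed to the (more expensive) detector for exposed nail tips.

def classify(image):
    # Stand-in for the trained image classification algorithm.
    return image["label"]  # "block", "non-block", or "background"

def detect_exposed_tips(image):
    # Stand-in for the trained object detection algorithm; returns
    # bounding boxes (x, y, width, height) for exposed nail tips.
    return image.get("exposed_tips", [])

def inspect(images):
    tagged = [img for img in images if classify(img) == "block"]
    findings = []
    for img in tagged:
        findings.extend(detect_exposed_tips(img))
    return tagged, findings

images = [
    {"label": "background"},
    {"label": "block", "exposed_tips": [(120, 85, 14, 9)]},
    {"label": "non-block"},
]
tagged, findings = inspect(images)  # 1 tagged image, 1 exposed tip found
```

Running only the detector on classifier-tagged images mirrors the pre-filtering role the classification algorithm plays later in the description.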
- Another aspect is directed to a method for training an object detection algorithm as described above.
- The method includes creating a database of images of pallets with protruding nails and other nail defects, defining different categories that are to be detected in the database, and annotating the images in the database corresponding to the different categories to be detected.
- A model is trained using machine learning to learn a function that produces mappings between the annotated images and the different categories to be detected.
- The method further includes analyzing the output data from the model, and optimizing the model based on the analyzed output data.
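The training workflow (annotate, learn a mapping from images to categories, then evaluate) can be illustrated with a toy nearest-centroid model. The feature vectors and category names below are invented for the sketch and merely stand in for a real machine-learning model trained on annotated pallet images.

```python
# Toy stand-in for the training step: learn one centroid per annotated
# category, so prediction maps a feature vector to the nearest category.
# Feature vectors and category names are invented for illustration.

def train(annotated):
    """annotated: list of (feature_vector, category) pairs."""
    sums, counts = {}, {}
    for features, category in annotated:
        acc = sums.setdefault(category, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[category] = counts.get(category, 0) + 1
    return {c: [v / counts[c] for v in acc] for c, acc in sums.items()}

def predict(model, features):
    def squared_distance(category):
        return sum((a - b) ** 2 for a, b in zip(model[category], features))
    return min(model, key=squared_distance)

annotated = [
    ([1.0, 0.2], "protruding_nail"), ([0.9, 0.3], "protruding_nail"),
    ([0.1, 0.9], "support_block"),   ([0.2, 0.8], "support_block"),
]
model = train(annotated)
predict(model, [0.95, 0.25])  # → "protruding_nail"
```

The analyze/optimize steps of the method would correspond to checking predictions against held-out annotations and refitting.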
- Yet another aspect is directed to a method for operating an object detection algorithm as described above.
- The method includes receiving images of pallets to be inspected, and executing a machine learning model trained to learn a function that produces mappings between annotated images of pallets with protruding nails and other nail defects to be detected.
- The annotated images correspond to different categories to be detected.
- The method further includes identifying objects by location in the received images corresponding to the different categories to be detected, providing confidence values for the categories that were detected in the received images, and identifying the pallets with protruding nails based on the confidence values.
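Identifying pallets from per-detection confidence values might look like the following sketch. The 0.5 threshold and the category/pallet names are assumptions for illustration; the text does not specify a threshold.

```python
# Flag pallets whose detections include a "protruding_nail" category at or
# above a confidence threshold. Threshold and names are illustrative only.
PROTRUDING_NAIL = "protruding_nail"

def flag_pallets(detections_by_pallet, threshold=0.5):
    """detections_by_pallet: pallet id -> list of (category, confidence)."""
    flagged = []
    for pallet_id, detections in detections_by_pallet.items():
        if any(cat == PROTRUDING_NAIL and conf >= threshold
               for cat, conf in detections):
            flagged.append(pallet_id)
    return flagged

results = {
    "P-001": [("support_block", 0.98), ("protruding_nail", 0.91)],
    "P-002": [("support_block", 0.97), ("nail_body_visible", 0.88)],
}
flag_pallets(results)  # → ["P-001"]
```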
- FIG. 1 is a top perspective view of a wooden pallet in which various aspects of the disclosure may be implemented.
- FIG. 2 is a bottom perspective view of the wooden pallet illustrated in FIG. 1.
- FIG. 3 is a partial cross-sectional view of the wooden pallet illustrated in FIG. 1 with a protruding nail.
- FIG. 4 is a block diagram of a pallet inspection system in which various aspects of the disclosure may be implemented.
- FIG. 5 is a more detailed block diagram of the protruding nail detection station illustrated in FIG. 4 for detecting protruding nails in a wooden pallet.
- FIGS. 6-8 are images generated by the protruding nail detection station illustrated in FIG. 5 that have been classified by the image classification algorithm as a block image, a non-block image or a background image.
- FIG. 9 is a block diagram of an example object detection algorithm used by the protruding nail detection station illustrated in FIG. 5 for detecting protruding nails in a wooden pallet.
- FIG. 10 is a flow diagram for training the object detection algorithm illustrated in FIG. 9.
- FIGS. 11-13 are annotated images used to train the object detection algorithm illustrated in FIG. 9.
- FIG. 14 is a flow diagram for operating the object detection algorithm illustrated in FIG. 9.
- FIGS. 15-17 are image outputs from the object detection algorithm illustrated in FIG. 9.
- FIG. 18 is a flow diagram for operating the pallet inspection system illustrated in FIG. 4.
- Nail defects in a pallet are a concern, particularly when the nail defect is a protruding nail.
- A protruding nail is a nail whose tip is exposed.
- The form factor of the pallet is not affected, but the protruding nail needs to be detected during inspection of the pallet so the defect can be corrected.
- Referring to FIGS. 1-3, an example wooden pallet 10 will be discussed.
- The wooden pallet 10 is discussed for purposes of illustrating nail placement in general within a wooden pallet.
- The wooden pallet 10 as illustrated is not to be limiting, as other wooden pallet configurations are readily available.
- The wooden pallet 10 includes a bottom deck 20, a top deck 30, and a plurality of wooden support blocks 40, 46 coupled between the bottom and top decks.
- The support blocks 40, 46 form a gap 50 (i.e., pockets) between the bottom and top decks 20, 30 for receiving a lifting member, such as forklift tines.
- The top deck 30 includes a pair of spaced apart wooden end deck boards 32, and wooden intermediate deck boards 34 positioned between the end deck boards 32. Also included within the top deck 30 are a pair of spaced apart wooden connector boards 36 and a wooden intermediate connector board 37. The connector boards 36 and the intermediate connector board 37 are orthogonal to the end deck boards 32 and the intermediate deck boards 34. The end deck boards 32 and the intermediate deck boards 34 are positioned on the connector boards 36 and are directly coupled to the support blocks 40, 46 via nails 70.
- The bottom deck 20 includes bottom deck boards 22, 26 oriented in the same direction as the end deck boards 32 and the intermediate deck boards 34 in the top deck 30.
- The bottom deck boards 22, 26 are also directly coupled to the support blocks 40, 46 via nails 70.
- The support blocks include corner support blocks 40 and center support blocks 46 between the corner support blocks 40. In total, there are 9 support blocks 40, 46 positioned in rows of 3.
- The outer rows each include a pair of corner support blocks 40 and a single center support block 46, and the center row includes all center support blocks 46.
- The actual number of support blocks may vary based on the configuration and size of the wooden pallet 10.
- The corner support blocks 40 and the center support blocks 46 each have a rectangular shape. In other configurations, one or more of the support blocks 40, 46 may have a non-rectangular shape, such as a circular shape.
- A partial cross-sectional view of the wooden pallet 10 is provided in FIG. 3 to illustrate placement of the nails 70 within the support blocks 40, 46.
- Nails 70 enter from the bottom and top decks 20, 30 into the support blocks 40, 46.
- A protruding nail 80 may occur when a nail operator places the nail 70 off center, causing the tip 72 to be exposed.
- A protruding nail 80 may also occur when a nail 70 is driven into the location of an existing nail and bounces off the existing nail, causing the tip 72 to be exposed. In other cases, wear and tear on the wooden pallet 10 may cause part of a support block 40, 46 to break off, resulting in a nail tip 72 being exposed.
- A protruding nail 80 is not limited to protruding from the support blocks 40, 46.
- A protruding nail 80 may also protrude from other regions or areas of the pallet 10, such as from one of the connector boards 36, for example.
- Detecting protruding nails 80 in a wooden pallet 10 is challenging.
- The processing unit executes a machine learning object detection algorithm that has been trained to detect protruding nails 80. How the algorithm is trained and executed will also be described in detail below.
- The processing unit may be a graphics processing unit (GPU), a central processing unit (CPU) or an edge computing device, for example.
- Prior to inspection, the pallets 10 are provided to a stack in-feed 152.
- The stack in-feed 152 squares each stack of pallets before it is passed to a tipper/accumulator 154.
- The tipper/accumulator 154 provides a steady stream of spaced apart pallets 10 to a conveyor, for example. As the pallets 10 are moved on the conveyor, they pass through a preparation screening line 156.
- Each pallet 10 is visually inspected by a human operator to remove any loose debris or trash that may affect inspection. If necessary, the human operator will also make minor repairs. In some cases, a pallet 10 may be discarded at the preparation screening line 156 if it is too badly damaged, as indicated by block 158.
- Each pallet 10 is then moved to the protruding nail detection station 160 for inspection.
- The pallets 10 are inspected for pallet classification, and to detect protruding nails 80.
- Pallet classification by the protruding nail detection station 160 determines whether a pallet 10 being inspected belongs to the pallet pooling company operating the pallet inspection system 140.
- A first set of cameras 162 within the protruding nail detection station 160 generates upper and lower images of each pallet 10.
- The images may be video images or still images.
- The images may be in color or monochrome, and are provided to a processing unit 170.
- The processing unit 170 executes a first algorithm for pallet classification 172.
- The first algorithm may be a machine learning (ML) image classification algorithm, for example.
- The image classification algorithm compares the generated image data to an expected profile.
- The expected profile is used for pallet classification.
- The image classification algorithm labels or tags each image as to whether or not the pallet 10 belongs to the pallet pooling company. For example, if 75 images were generated of the pallet 10, and a majority of these images are tagged as belonging to the pallet pooling company, then the pallet 10 is classified as such.
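The majority-vote rule in the example above can be written directly; the boolean tag representation is an illustrative assumption.

```python
# Illustrative majority-vote rule for pallet classification: the pallet is
# classified as belonging to the pool when a majority of its per-image tags
# say so, following the "majority of these images" language in the text.

def classify_pallet(image_tags):
    """image_tags: one boolean per image (True = tagged as belonging)."""
    return sum(image_tags) > len(image_tags) / 2

tags = [True] * 50 + [False] * 25   # e.g., 50 of 75 images tagged
classify_pallet(tags)               # → True
```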
- The image classification algorithm associated with the first set of cameras 162 may also detect certain obvious nail defects, such as raised nails and free-standing nails.
- With a raised nail, the head of the nail extends slightly above the top deck. A raised nail could potentially impact any product placed on the pallet.
- With a free-standing nail, the head and body of the nail are visible but not the tip.
- Protruding nail detection by the protruding nail detection station 160 determines whether a protruding nail 80 is present in a pallet 10.
- A second set of cameras 164 within the protruding nail detection station 160 generates images of the sides of each pallet 10. The images are in color, and are provided to the processing unit 170. Alternatively, the images may be monochrome.
- The processing unit 170 executes a second algorithm that is trained to detect protruding nails 80.
- The second algorithm may be a machine learning (ML) object detection algorithm, for example.
- The pallet sort line 180 receives the pallets 10 as they exit the protruding nail detection station 160.
- The pallet sort line 180 queries the processing unit 170 to determine pallet classification. If the inspected pallet 10 does not belong to the pallet pooling company operating the pallet inspection system 140, then the pallet 10 is sent to a discard line 182.
- The protruding nail detection station 160 includes first and second sets of cameras 162, 164.
- The cameras may be combined such that the same set of cameras used for pallet classification is also used for protruding nail detection. That is, the camera sets are not mutually exclusive.
- The pallet sort line 180 queries the processing unit 170 to determine if the pallet 10 is good or bad.
- If the inspected pallet 10 is bad, this means that the pallet requires repair, and it is sent to a repair line 184. After repair, the pallet 10 is sent to a paint line 186 for painting before being returned to service. If the inspected pallet 10 is good, this means that the pallet does not require repair, and it is instead sent directly to the paint line 186 before being returned to service.
- The conveyor 105 moves the pallet 10 through the protruding nail detection station 160 in the direction of the illustrated arrows.
- The conveyor 105 includes a first sensor 110 at the entrance of the protruding nail detection station 160, and a second sensor 112 at the exit of the protruding nail detection station 160. In other embodiments, a single sensor may be used.
- The first and second sensors 110, 112 may be configured as photoelectric sensors, for example.
- Each photoelectric sensor includes a transmitter and receiver on opposite sides of the conveyor 105. The transmitter transmits a light signal, which may be visible or infrared, to the receiver. The pallet 10 is detected when the light beam is blocked from getting to the receiver from the transmitter.
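The beam-break behavior can be modeled in a few lines. The class below is a hypothetical sketch of the sensor logic, not part of the described hardware.

```python
# Minimal model of a photoelectric sensor: the pallet is detected on the
# transition from beam-received to beam-blocked (the pallet interrupting
# the light path between transmitter and receiver).

class BeamBreakSensor:
    def __init__(self):
        self._blocked = False

    def update(self, beam_received):
        """Returns True only on the sample where the beam first breaks."""
        was_blocked = self._blocked
        self._blocked = not beam_received
        return self._blocked and not was_blocked

sensor = BeamBreakSensor()
samples = [True, True, False, False, True]    # beam received per sample
events = [sensor.update(s) for s in samples]  # → [False, False, True, False, False]
```

Edge detection (rather than level detection) ensures one trigger per pallet, matching the one-shot camera triggering described next.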
- Output from the first and second sensors 110, 112 is provided to a controller 130.
- In response to receiving an output from the first and second sensors 110, 112, the controller 130 is configured to trigger the second set of cameras 164 as the pallet is being moved on the conveyor 105.
- The second set of cameras 164 is divided into a first camera set 164(1), 164(2) and a second camera set 164(3), 164(4).
- The cameras in the first camera set 164(1), 164(2) are triggered by the first sensor 110.
- A first stopper 111 in the path of the pallet 10 is dropped below the conveyor 105 to allow the pallet 10 to enter the protruding nail detection station 160.
- The controller 130 activates or triggers the cameras in the first camera set 164(1), 164(2) to provide images to the processing unit 170 for processing.
- The images generated by the first camera set 164(1), 164(2) are a subset of the overall images received by the processing unit 170 for the pallet 10.
- The remaining images received by the processing unit 170 are generated by the second camera set 164(3), 164(4).
- The cameras in the first camera set 164(1), 164(2) are positioned to view a front perspective of the support blocks 40, 46.
- The cameras in the second camera set 164(3), 164(4) are positioned to view a rear perspective of the support blocks 40, 46.
- The cameras in the second camera set 164(3), 164(4) are triggered by the second sensor 112.
- A second stopper 113 in the path of the pallet 10 is dropped below the conveyor 105 to allow the pallet 10 to exit the protruding nail detection station 160.
- The controller 130 activates or triggers the cameras in the second camera set 164(3), 164(4) to provide images to the processing unit 170 for processing.
- The first camera set 164(1), 164(2) includes 6 cameras.
- The second camera set 164(3), 164(4) also includes 6 cameras. Half of the cameras are on each side of the conveyor 105. Even though 12 cameras are being used, a different number of cameras may be used in other embodiments.
- The support blocks 40, 46 in each pallet 10 are spaced apart to form a pair of outer rows and a center row therebetween.
- The illustrated pallet includes 9 support blocks 40, 46, with 3 support blocks 40, 46 in each row. Each row is parallel with the conveyor 105.
- Each of the outer rows includes a pair of corner support blocks 40 and a center support block 46.
- The center row includes center support blocks 46 only.
- Each camera in the first and second camera sets 164(1)-164(4) is focused on a particular row of support blocks.
- Camera group 164(1) includes 3 cameras.
- A first camera is focused on the outer row of support blocks 40, 46 closest to the camera group 164(1).
- A second camera is focused on the center row of support blocks 46.
- A third camera is focused on the outer row of support blocks 40, 46 farthest from the camera group 164(1).
- Similarly, each camera in camera group 164(2) on the opposite side of the conveyor 105 is focused on a particular row of support blocks 40, 46.
- The cameras in the first camera set 164(1), 164(2) are positioned to view a front perspective of the support blocks 40, 46. There is typically a delay among the 6 cameras in camera groups 164(1), 164(2) as to when the support blocks 40, 46 in a particular row are in focus.
- The pair of cameras focused on the outer row of support blocks 40, 46 closest to each respective camera comes into focus first. This pair of cameras may be immediately triggered in response to the first sensor 110 detecting arrival of the pallet 10.
- The pair of cameras focused on the center row of support blocks 46 comes into focus next, since the center support blocks 46 are farther away from the cameras. Consequently, this pair of cameras may be delayed 25 milliseconds, for example, before being triggered by the controller 130 to generate images.
- The pair of cameras focused on the outer row of support blocks 40, 46 farthest away comes into focus last, since those support blocks are the farthest from the cameras. Consequently, this pair of cameras may be delayed 50 milliseconds, for example, before being triggered by the controller 130 to generate images.
- Once a camera is triggered to provide images to the processing unit 170, the trigger lasts a predetermined time period. The predetermined time period may be between 3 and 4 seconds, for example, and may be varied based on the speed of the conveyor 105.
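The staggered triggering can be expressed as a schedule computation. The row names are invented labels; the 0/25/50 ms delays and the capture duration use the example figures from the text.

```python
# Illustrative per-row trigger schedule: the near outer row is triggered
# immediately, the center row after 25 ms, and the far outer row after
# 50 ms; each capture window lasts a configurable duration.
ROW_DELAYS_MS = {"near_outer": 0, "center": 25, "far_outer": 50}

def trigger_schedule(sensor_time_ms, duration_ms=3000):
    """Returns a (start, end) capture window per row, in milliseconds."""
    return {row: (sensor_time_ms + delay, sensor_time_ms + delay + duration_ms)
            for row, delay in ROW_DELAYS_MS.items()}

trigger_schedule(0)
# → {'near_outer': (0, 3000), 'center': (25, 3025), 'far_outer': (50, 3050)}
```

Varying `duration_ms` with conveyor speed reflects the note that the trigger period depends on how fast the pallet moves.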
- The cameras in the second camera set 164(3), 164(4) are configured similarly to the cameras in the first camera set 164(1), 164(2). Each camera in the second camera set 164(3), 164(4) is likewise focused on a particular row of support blocks. As noted above, the cameras in the second camera set 164(3), 164(4) are positioned to view a rear perspective of the support blocks 40, 46.
- The controller 130 may immediately trigger the pair of cameras focused on the outer row of support blocks 40, 46 closest to the respective cameras, and delay triggering of the pair of cameras focused on the center row of support blocks 46 and the pair of cameras focused on the outer row of support blocks 40, 46 farthest away from the respective cameras.
- Operation of the processing unit 170 to detect protruding nails 80 from the received images is a two-step process.
- A first step is to determine the images having a support block 40, 46 therein that is in focus for one of the cameras.
- A second step is to analyze only the images having a focused support block 40, 46 therein for protruding nails 80.
- Each camera may generate about 40 images per pallet 10. With 12 cameras, this corresponds to about 480 images per pallet. The actual number of images generated is variable, and can change from one deployment site to another.
- The processing unit 170 executes a first algorithm on the 480 images.
- The first algorithm may be an image classification algorithm 132, for example.
- The image classification algorithm 132 is trained using artificial intelligence (AI) and machine learning (ML) to determine the images with focused support blocks 40, 46 therein.
- If an image has a focused support block 40, 46 therein, it is tagged by the image classification algorithm 132.
- The image classification algorithm 132 functions as a pre-filter on the images provided by the second set of cameras 164 to the processing unit 170.
- The tagged images are provided to a second algorithm, which may be an object detect algorithm 134, for example.
- The images that are not tagged are not passed to the object detect algorithm 134.
- As an example, out of the 480 images, about one-half to one-third of the images may not be tagged, and thus never reach the object detect algorithm 134.
- Execution of the object detect algorithm 134 is computationally more intensive than execution of the image classification algorithm 132. Reducing the number of images processed by the object detect algorithm 134 simplifies the overall processing to determine protruding nails 80.
- The image classification algorithm 132 is trained to classify each image as a block image, a non-block image, or a background image.
- A block image corresponds to a support block 40, 46 that is in focus.
- A non-block image corresponds to a support block 40, 46 that is out-of-focus.
- A background image corresponds to neither a support block 40, 46 that is in focus nor a support block 40, 46 that is out-of-focus.
- the image 200 in FIG. 6 is a block image since the support block 40 is in focus.
- the image 202 in FIG. 7 is a non-block image since the support block 40 is out-of-focus.
- the image 204 in FIG. 8 is a background image since an in focus or out-of-focus support block is not visible.
- a background image is typically generated when the pallet 10 first arrives at the protruding nail detection station 160, and when the pallet 10 exits the protruding nail detection station 160.
- the image classification algorithm 132 uses percentages to classify each image.
- the classification percentage varies as the pallet 10 moves on the conveyor 105.
- the classification percentages are displayed for each image and total 100%.
- a percentage number is assigned by the image classification algorithm 132 to each of the three possible classifications.
- based on the highest percentage, the image classification algorithm 132 will classify the image accordingly.
- for image 200, classification as a block image is 100%, whereas the non-block image and background image classifications are each 0%.
- for image 202, classification as a non-block image is 99.9511%, whereas classification as a block image is 0.0333% and as a background image is 0.0156%.
- for image 204, classification as a background image is 82.796%, whereas classification as a non-block image is 17.0867% and as a block image is 0.1173%.
- the pallet 10 is partially visible as it is exiting the protruding nail detection station 160.
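The percentage-based classification above behaves like a softmax over three class scores: the percentages always total 100%, and the image is classified by the highest one. A minimal sketch (the raw scores are made up for illustration):

```python
import math

def class_percentages(scores):
    """Convert raw classifier scores into percentages that total 100%."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [100.0 * e / total for e in exps]

labels = ["block", "non-block", "background"]
pcts = class_percentages([0.1, 7.5, 1.2])     # hypothetical raw scores
best = labels[pcts.index(max(pcts))]
print(best)  # the "non-block" class dominates, mirroring image 202 above
```
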
- the tagged images that have a focused support block 40, 46 will be generally referred to as tagged images 200. These images are passed to the object detect algorithm 134 for processing. Rather than tagging images, the object detect algorithm 134 is trained to locate objects within the tagged images 200.
- the objects to be located are support blocks and visible nails overlapping with the support blocks.
- the visible nails 70 may be protruding nails 80 or may be nails 70 where the body of the nail is visible but not the tip.
- the object detect algorithm 134 is trained using annotated images that include a number of different categories that are to be detected, where each category corresponds to a particular type of object to be detected and located. By learning the different categories, the object detect algorithm 134 will be able to differentiate between a protruding nail that is to be corrected and other types of nail defects that do not need to be corrected. The different categories provide contextual details to the object detect algorithm 134.
- the object detect algorithm 134 may operate based on artificial intelligence (AI) and machine learning (ML) to determine objects within the images 200 having the focused support blocks 40, 46 therein.
- the object detect algorithm 134 may be a single shot detector (SSD) 210 as illustrated in FIG.
- other types of object detectors may be used as an alternative to the illustrated SSD 210.
- with the SSD 210, only a single shot of the image 200 is taken to detect multiple objects within the image 200.
- the SSD 210 is an open source algorithm modified to detect protruding nails 80.
- an SSD 210, in general, has a base VGG-16 network 212 followed by multibox convolution layers 214, 216.
- the base VGG-16 network 212 is used to extract features within an image 200.
- Convolutional layers 214 are for detection, and convolutional layers 216 help with detection of objects at multiple scales since these layers decrease in size progressively.
- the convolutional model for detection is different for each feature layer.
- the multibox convolution layers 214, 216 are applied to multiple feature maps from the later stages of a network. This helps perform detection at multiple scales. Prediction for the bounding boxes and confidence for different objects in the image 200 is done not by one but by multiple feature maps of different sizes that represent multiple scales.
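The multi-scale idea can be made concrete with the feature-map configuration of the original, publicly documented SSD300 architecture (the patent's SSD 210 may differ): each progressively smaller feature map contributes default bounding boxes at its own scale, and all of them are evaluated in one forward pass.

```python
# Default-box arithmetic for the standard SSD300 configuration.
feature_maps = [38, 19, 10, 5, 3, 1]   # spatial size of each feature map
boxes_per_cell = [4, 6, 6, 6, 4, 4]    # default boxes per map location

# Each map contributes size * size * boxes default boxes.
total = sum(s * s * b for s, b in zip(feature_maps, boxes_per_cell))
print(total)  # 8732 default boxes evaluated in a single shot
```

The large early maps catch small objects (such as a nail tip), while the small late maps catch large objects (such as a whole support block).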
- the problem of detecting protruding nails can be addressed in different ways.
- One way is to use object detection using bounding boxes, as will be discussed below.
- a segmentation approach to segment pixels may be used to define a contour of protruding nail pixels in an image.
- the algorithm may be trained to detect regions using both bounding boxes and segmentation pixels.
- a database of images of pallets 10 with protruding nails 80 and other nail defects is created at Block 254.
- the number of pallets 10 may be rather large, such as 100 or more, for example.
- the images from all 12 cameras for each pallet 10 are collected and stored in the database. There are about 480 images associated with each pallet 10.
- the different categories that are to be detected are defined at Block 256. These categories include support blocks, protruding nails, partially visible nails, clinched nails, free standing nails, and splinters. Other types of categories may be defined to help the object detect algorithm 134 detect protruding nails 80.
- the images in the database are annotated at Block 258 to reflect the different categories to be detected. Each image is manually reviewed, and if the image has a category that is to be detected by the object detect algorithm 134, then the image is annotated with a bounding box and labeled accordingly.
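One way to picture the annotation step in Block 258 is a record per image holding a bounding box (in pixels) and a category label for each object of interest. The field names and filename below are hypothetical, chosen only to illustrate the structure:

```python
# Hypothetical annotation record for one training image: each manually
# drawn bounding box is stored as [x, y, width, height] in pixels along
# with its category label from Block 256.
annotation = {
    "image": "pallet_0001_cam07_frame12.png",  # illustrative filename
    "objects": [
        {"category": "support_block",   "bbox": [250, 100, 400, 380]},
        {"category": "protruding_nail", "bbox": [310, 420, 18, 45]},
    ],
}

categories = {obj["category"] for obj in annotation["objects"]}
print(sorted(categories))
```
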
- Example images that have been annotated are provided in FIGS. 11- 13.
- a green bounding box 202 is placed around each support block 40, 46.
- a red bounding box 204 is used for protruding nails 80.
- in FIG. 12, for example, a yellow bounding box 206 is placed around a nail 70 having the body of the nail visible but not the tip.
- the support block 40 in FIG. 13 is splintered resulting in a pair of protruding nails 80 being visible.
- Detecting splintered support blocks is helpful since, at times, it can be hard to differentiate between a splintered support block and a nail that has been colored. Normally, the nail would appear as a gray/black color. However, if the splintered support block was not detected earlier and the pallet was sent to paint, the exposed nail is painted the same color as the block.
- the images may be annotated for identifying other types of defects.
- the images may be annotated for detecting stickers and stretch wrap, for example, that remain on the pallet 10. Normally, each pallet 10 is visually inspected by a human operator to remove any loose debris or trash that may affect inspection. Adding sticker and stretch wrap detection helps to identify such areas that may not be caught during the visual inspection of each pallet 10.
- a model is trained at Block 260 using machine learning to learn a function that produces mappings between the annotated images and the different categories to be detected.
- the machine learning may be based on a neural network to train the model.
- alternatively, a neural network is not used to train the model. Since a neural network based model/algorithm or another type of non-neural network based model/algorithm may be used, the approach is not limited to a single algorithm or set of algorithms.
- the output data from the model is analyzed at Block 262.
- the model is optimized at Block 264 based on the analyzed output data.
- the method ends at Block 266.
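The training workflow of Blocks 254 through 266 can be outlined as a sequence of steps. Every function below is a placeholder standing in for the corresponding block; none of the names or return values come from the actual system:

```python
# Hypothetical outline of the training workflow (Blocks 254-266).

def build_database():
    return ["img"] * 6                    # stand-in for the pallet images

def annotate(images):
    # Stand-in for manual annotation against the defined categories.
    return [(img, "support_block") for img in images]

def train(annotated):
    # Stand-in for machine-learning a function that maps annotated
    # images to the categories to be detected.
    return {"mapping": dict.fromkeys(label for _, label in annotated)}

def analyze(model):
    return {"detected_categories": len(model["mapping"])}

def optimize(model, report):
    model["optimized"] = True
    return model

images = build_database()        # Block 254
annotated = annotate(images)     # Block 258 (categories from Block 256)
model = train(annotated)         # Block 260
report = analyze(model)          # Block 262
model = optimize(model, report)  # Block 264
print(model["optimized"], report["detected_categories"])
```
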
- the method includes executing a machine learning model that has been trained to learn a function that produces mappings between annotated images of pallets 10 with protruding nails 80 and other nail defects to be detected.
- the annotated images correspond to the different categories to be detected, as discussed above.
- Detected objects corresponding to the different categories to be detected in the images 200 are identified by location at Block 308. A bounding box is placed around each identified object. Example images that have detected objects are provided in FIGS. 15-17.
- a bounding box 330 is placed around the support block 46.
- the support block 46 has been detected even though a chunk of wood has been removed.
- separate bounding boxes 332, 334 are used for the protruding nails 80.
- a bounding box 336 is placed around the splintered support block 46.
- separate bounding boxes 338, 340 are used for the protruding nails 80.
- a bounding box 342 is placed around the splintered support block 46.
- a pair of clinched nails 71 have been detected where the tips are exposed.
- bounding boxes 344, 346 are placed around the clinched nails 71.
- the body of a nail 70 is visible within the image 324, but since the tip of the nail is not exposed, this nail is ignored.
- a location in pixels is determined by the object detect algorithm 134.
- the image 320 in FIG. 15 is 1,000 pixels by 1,000 pixels.
- the 0, 0 coordinates, for example, are at the bottom left of the image 320.
- the x-axis of the bounding box 330 around the support block 46 starts at 250 pixels, and the y-axis of the bounding box 330 starts at 100 pixels. A width and height of the bounding box 330 is then determined.
- the x-axis of the bounding box 334 starts at 720 pixels
- the y-axis of the bounding box 334 starts at 110 pixels.
- a width and height of the bounding box 334 is then determined. This is performed for each bounding box.
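The coordinate bookkeeping described above can be sketched with a small data structure: the detector reports an origin in pixels (with 0, 0 at the bottom left of the image) plus a width and height. The origin values mirror the example for bounding box 330; the width and height are made-up numbers, since the text does not give them:

```python
from dataclasses import dataclass

@dataclass
class BoundingBox:
    x: int       # pixels from the left edge (x-axis start)
    y: int       # pixels from the bottom edge (y-axis start)
    width: int
    height: int

    def corners(self):
        """Return (x_min, y_min, x_max, y_max) in pixels."""
        return (self.x, self.y, self.x + self.width, self.y + self.height)

# x and y from the example above; width/height are illustrative only.
box_330 = BoundingBox(x=250, y=100, width=500, height=400)
print(box_330.corners())  # (250, 100, 750, 500)
```
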
- the method further includes providing confidence values at Block 310 for the categories that were detected in the received images 320, 322, 324.
- the confidence value for each of the support blocks is 99%.
- the confidence value also has the detected category associated therewith.
- the word “block” is used to indicate a support block 40, 46.
- a “protruding nail” label and a confidence value of each detected protruding nail are also provided.
- protruding nails 80 typically have a high confidence value.
- the confidence values range from 87% to 99%.
- a “clinch nail” label and a confidence value are provided.
- the pallets 10 with protruding nails 80 are identified based on the confidence values at Block 312. If the confidence value is above a threshold, such as 75%, then each bounding box is labeled accordingly. The method ends at Block 314.
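The thresholding step in Block 312 can be sketched as a simple filter: detections below the confidence threshold (75% in the example above) are dropped, and the pallet is flagged if any surviving detection is a protruding nail. The function and field names are illustrative only:

```python
# Hypothetical sketch of Block 312: threshold detections by confidence,
# then flag the pallet if a protruding nail survives the filter.

def flag_pallet(detections, threshold=0.75):
    kept = [d for d in detections if d["confidence"] >= threshold]
    needs_repair = any(d["label"] == "protruding nail" for d in kept)
    return kept, needs_repair

detections = [
    {"label": "block", "confidence": 0.99},
    {"label": "protruding nail", "confidence": 0.87},
    {"label": "clinch nail", "confidence": 0.60},  # below threshold, dropped
]
kept, needs_repair = flag_pallet(detections)
print(len(kept), needs_repair)  # 2 True
```
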
- Another aspect is directed to a method for operating the pallet inspection system 140 as described above. Referring now to the flow diagram 350 in FIG. 18, from the start (Block 352), the method includes operating a conveyor 105 at Block 354 to move a pallet 10 that is to be inspected. As described above, the pallet 10 includes a top deck 30 and a bottom deck 20 separated by a plurality of spaced apart support blocks 40, 46 positioned therebetween. Nails 70 are used to secure the top and bottom decks 30, 20 to the support blocks 40, 46.
- Cameras 162, 164 positioned adjacent the conveyor 105 are operated at Block 356 to generate images of the pallet 10 as the pallet 10 is moved on the conveyor 105.
- the images are received by the processing unit 170 at Block 358 for processing.
- the processing includes executing a first algorithm 132 at Block 360 on the images to tag the images having support blocks 40, 46 visible therein.
- a second algorithm 134 is executed on the tagged images to detect nails 70 having exposed tips 72.
- the first algorithm 132 is an image classification algorithm
- the second algorithm 134 is an object detection algorithm. The method ends at Block 364.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2022366938A AU2022366938A1 (en) | 2021-10-13 | 2022-10-12 | Pallet inspection system and associated methods |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202163262452P | 2021-10-13 | 2021-10-13 | |
US63/262,452 | 2021-10-13 | ||
US18/045,579 | 2022-10-11 | ||
US18/045,579 US20230114085A1 (en) | 2021-10-13 | 2022-10-11 | Pallet inspection system and associated methods |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023064786A1 true WO2023064786A1 (en) | 2023-04-20 |
Family
ID=85796803
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2022/077936 WO2023064786A1 (en) | 2021-10-13 | 2022-10-12 | Pallet inspection system and associated methods |
Country Status (3)
Country | Link |
---|---|
US (1) | US20230114085A1 (en) |
AU (1) | AU2022366938A1 (en) |
WO (1) | WO2023064786A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3913526A1 (en) * | 2020-05-18 | 2021-11-24 | CHEP Technology Pty Limited | Platform detection |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150105892A1 (en) * | 2003-12-19 | 2015-04-16 | Chep Technology Pty Limited | Software and methods for automated pallet inspection and repair |
US20210133666A1 (en) * | 2019-10-31 | 2021-05-06 | Lineage Logistics, LLC | Profiling pallets and goods in a warehouse environment |
Also Published As
Publication number | Publication date |
---|---|
US20230114085A1 (en) | 2023-04-13 |
AU2022366938A1 (en) | 2024-04-11 |
Legal Events
- 121 — EP: the EPO has been informed by WIPO that EP was designated in this application (ref document 22881966, country EP, kind code A1)
- WWE — WIPO information: entry into national phase (ref document 2022366938, country AU; ref document AU2022366938, country AU)
- ENP — Entry into the national phase (ref document 2022366938, country AU, date of ref document 2022-10-12, kind code A)
- WWE — WIPO information: entry into national phase (ref document 2022881966, country EP)
- NENP — Non-entry into the national phase (ref country code DE)
- ENP — Entry into the national phase (ref document 2022881966, country EP, effective date 2024-05-13)