WO2023009270A1 - Conveyed-object identification system - Google Patents

Conveyed-object identification system

Info

Publication number
WO2023009270A1
Authority
WO
WIPO (PCT)
Prior art keywords
shape
outer boundary
bounding box
toppled
digital image
Prior art date
Application number
PCT/US2022/035409
Other languages
French (fr)
Inventor
Cuong P. MAI
Brian A. TRAPANI
Original Assignee
Laitram, L.L.C.
Priority date
Filing date
Publication date
Application filed by Laitram, L.L.C. filed Critical Laitram, L.L.C.
Publication of WO2023009270A1 publication Critical patent/WO2023009270A1/en

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/255 Detecting or recognising potential candidate objects based on visual cues, e.g. shapes



Abstract

A method and system for detecting objects, such as cans or bottles, conveyed on a conveyor. A rangefinder, such as a Lidar camera, takes periodic depth frames of a target area on the conveyor. A processing system uses the depth frames to identify and track the objects from frame to frame as they advance along the conveyor. Objects having certain characteristics, such as a can that is toppled, are counted.

Description

CONVEYED-OBJECT IDENTIFICATION SYSTEM
BACKGROUND
The invention relates generally to the conveyance of objects, such as cans or bottles, and, in particular, to the use of a rangefinder to identify, track, and count toppled or upright cans or bottles.
Cans and bottles, especially when empty, are subject to toppling over while being conveyed en masse. The constant jostling caused by the continuous motion of the conveyor on the crowd of cans or bottles inevitably causes some cans or bottles to topple over on their sides. Toppled cans or bottles are not oriented to be filled or labeled by a filler or labeler. So they have to be constantly monitored and removed from the flow or uprighted.
SUMMARY
A method embodying features of the invention for identifying objects conveyed in a mass of objects on a conveyor comprises: a) at a predetermined rate with a rangefinder, taking consecutive depth frames encompassing a target area on a conveyor conveying objects in a conveying direction; b) producing an array of pixels constituting a digital image of each depth frame; c) finding an outer boundary or a bounding box surrounding the outer boundary of a group of contiguous pixels in each digital image whose values lie on the same side of a threshold value; d) determining the shape of the outer boundary or the bounding box; e) comparing the shape of the outer boundary or the bounding box to: 1) a predetermined toppled object model shape; or 2) a predetermined upright object model shape; or 3) both; and f) identifying a toppled object when the shape of the outer boundary or the bounding box matches the shape of the predetermined toppled object model shape or an upright object when the shape of the outer boundary or the bounding box matches the shape of the predetermined upright object model shape.
One version of a system for identifying objects on a conveyor comprises a conveyor conveying objects in a conveying direction and a rangefinder disposed above the conveyor scanning a field of view encompassing a target area on the conveyor at a predetermined rate. The rangefinder produces depth frames indicating its distance to objects in the field of view. A processing system executes program instructions to: a) produce an array of pixels constituting a digital image of each depth frame; b) find an outer boundary or a bounding box surrounding the outer boundary of a group of contiguous pixels in each digital image whose values lie on the same side of a threshold value; c) determine the shape of the outer boundary or the bounding box; d) compare the shape of the outer boundary or the bounding box to: 1) a predetermined toppled object model shape; or 2) a predetermined upright object model shape; or 3) both; and e) identify a toppled object when the shape of the outer boundary or the bounding box matches the shape of the predetermined toppled object model shape or an upright object when the shape of the outer boundary or the bounding box matches the shape of the predetermined upright object model shape.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a side elevation schematic view of an identification, tracking, and counting system embodying features of the invention.
FIGS. 2A and 2B are top plan schematics of the identification, tracking, and counting system as in FIG. 1 showing the positions of cans or bottles at a first time and at a short time later.
FIGS. 3A and 3B represent rectangular bounding boxes surrounding the outer boundaries of the images of upright and toppled cans.
FIG. 4 is a flowchart describing an exemplary identification, tracking, and counting procedure for the counting system of FIG. 1.
DETAILED DESCRIPTION
A can or bottle identification, tracking, and counting system embodying features of the invention is shown in FIG. 1. The system 10 comprises a rangefinder, such as a Lidar camera 12, which measures distances to a target, and a programmable processing system 14, which may be realized as a conventional processor and a graphics processing unit. A time-of-flight (TOF) sensor is an example of another kind of rangefinder. The Lidar camera 12 produces depth frames, each composed of an array of distance, or depth, measurements from the camera to objects in the camera's field of view. The Lidar camera may also include a color camera. One example of a Lidar camera that can be used is the Intel® RealSense™ LiDAR Camera L515.
The Lidar camera 12 is aimed at a portion of a conveyor 16, such as a belt conveyor, conveying cans 18, 19 in a conveying direction 20. (The term can is used in the description as an example of one kind of object that can be conveyed and counted. Its exclusive use is not meant to limit the claims to the conveying and counting of cans. Bottles and other objects subject to toppling while being conveyed may be similarly processed.) As also indicated by FIGS. 2A and 2B, a laser in the Lidar camera 12 scans a field of view 22 that covers a portion of the conveyor 16. The laser directs pulses of light in a pattern of discrete directions that define the field of view. Reflections of the light pulses off objects in the field of view are detected by the camera. The interval between the transmission of each pulse and the reception of its reflection is the two-way time of flight, which is proportional to the distance of the Lidar camera 12 from a reflecting object in that direction. Thus, each scan of the Lidar camera's laser produces a frame of distance, or depth, measurements to whatever is in the field of view 22. And, unlike RGB cameras, the depth-measuring Lidar camera 12 does not depend on external illumination.
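The distance the camera assigns to each direction follows from that two-way time of flight. As a minimal illustration of the relation (the function and the example round-trip time are assumptions for illustration, not details from the patent):

```python
# Illustrative sketch only: the Lidar camera performs this conversion internally.
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def tof_to_distance_m(two_way_time_s: float) -> float:
    """Distance to the reflecting object from the round-trip pulse time."""
    return SPEED_OF_LIGHT_M_PER_S * two_way_time_s / 2.0

# A 20 ns round trip corresponds to a reflector about 3 m away.
print(tof_to_distance_m(20e-9))   # ~2.998
```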
The Lidar camera 12 may have internal processing that produces a digital image of the field of view. Alternatively, the digital image can be produced by the processing system 14 from raw depth-frame data received from the Lidar camera 12 over a communication link or data bus 24 (FIG. 1). The digital image is composed of a two-dimensional array of pixels, as in traditional color images, except that the pixel values correspond to the distance, or depth, measurements covering the field of view.
A flowchart describing the processing steps programmed into program memories and executed by a processor in the Lidar camera 12 or by the external processing system 14 is shown in FIG. 4 as it applies to FIGS. 1, 2A, and 2B. The sequence of program steps shown in the flowchart and executed by the processing systems is repeated at a regular repetition rate that is fast enough to keep up with the conveying speed to allow individual cans to be tracked as they advance through a target area in the field of view 22. First, the Lidar camera grabs a depth frame covering the field of view 22, which encompasses a target area 24. The target area 24 is optionally cropped from the field of view 22 to remove pixels that lie outside the target area through which cans are conveyed. Cropping reduces the number of computations by eliminating pixels that represent the conveyor frame and other structure outside the area of interest. If the pixels in the depth frame produced by the Lidar camera 12 include color information, the color information is converted to gray scale, and the target area is optionally compressed, for example, to a smaller size that is still large enough to retain enough information to accurately detect the cans. The compressed target area reduces the number of computations. The result is a reduced-size digital image composed of a pixel array of gray-scale values that represent the distance of the Lidar camera from cans in the target area 24.
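As one hedged illustration of these preprocessing steps, the following OpenCV/NumPy sketch crops an assumed target-area rectangle, maps an assumed depth span to gray-scale values, and downscales the result; none of the specific numbers come from the patent:

```python
# A sketch only: crop, gray-scale conversion, and compression of a depth frame.
import cv2
import numpy as np

CROP_X, CROP_Y, CROP_W, CROP_H = 80, 40, 400, 240   # target-area rectangle (assumed)
DEPTH_MIN_MM, DEPTH_MAX_MM = 500.0, 900.0           # useful camera-to-belt depth span (assumed)
SCALE = 0.5                                          # compression factor (assumed)

def preprocess(depth_frame_mm: np.ndarray) -> np.ndarray:
    """Crop the target area, map depths to 8-bit gray scale, and downscale."""
    roi = depth_frame_mm[CROP_Y:CROP_Y + CROP_H, CROP_X:CROP_X + CROP_W].astype(np.float32)
    # Map the useful depth range onto 0..255 gray-scale values.
    gray = np.clip((roi - DEPTH_MIN_MM) / (DEPTH_MAX_MM - DEPTH_MIN_MM), 0.0, 1.0) * 255.0
    # Compress to a smaller image that still resolves individual cans.
    return cv2.resize(gray.astype(np.uint8), None, fx=SCALE, fy=SCALE,
                      interpolation=cv2.INTER_AREA)
```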
The digital image is blurred with a Gaussian filter to reduce high-frequency image noise and produce a smoother image. Then static conveyor frame components, which do not change from frame to frame, are masked out. Each pixel in the digital image is compared to a threshold. The threshold is a depth value that lies between the depth of an upright can and the depth of a toppled can. The gray-scale digital image is converted into a binary digital image by comparing the pixel values to the threshold: the pixels of the closer upright cans are assigned a minimum value, such as 0, and the pixels of the more distant toppled cans are assigned a maximum value, such as 1 or a full-scale digital value, or vice versa. Converting the gray-scale digital image to a binary digital image increases the contrast and simplifies detecting can boundaries and distinguishing toppled cans from upright cans.
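A minimal sketch of the blur, mask, and threshold steps, again with OpenCV; the kernel size, static mask, and threshold level are illustrative assumptions:

```python
# A sketch only: blur, mask out static structure, and threshold to a binary image.
import cv2
import numpy as np

THRESHOLD = 128   # gray level between upright-can and toppled-can depths (assumed)

def binarize(gray: np.ndarray, static_mask: np.ndarray) -> np.ndarray:
    """static_mask is 255 over the belt and 0 over fixed conveyor structure."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    masked = cv2.bitwise_and(blurred, blurred, mask=static_mask)
    # Brighter (more distant, toppled-can) pixels become 255; the rest become 0.
    _, binary = cv2.threshold(masked, THRESHOLD, 255, cv2.THRESH_BINARY)
    return binary
```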
In the detection of toppled cans, for example, all pixel values that lie on the white side of the threshold are considered possible toppled-can pixels. The pixels on the dark side of the threshold can be ignored. The processing system finds a non-square rectangular bounding box 36 closely surrounding the outer boundary 34 of each possible toppled can from each group of contiguous pixels all of whose values are on the white side of the threshold. FIG. 3B shows the rectangular bounding box 36 surrounding the outer boundary 34 of an image of a toppled can. When the Lidar camera is positioned close to the cans, the exposed sides of upright cans adjacent to toppled cans can produce pixel values in the gray-scale image that lie on the same side of the threshold as the values of the pixels of toppled cans. This has the effect of falsely expanding the perceived outer boundary of a toppled can in the thresholded image. To avoid that false boundary, the corresponding pixels in the gray-scale frame image in the boundary-enclosed region are inspected. Those gray-scale pixels that are not close in value to the majority of the pixels in the boundary-enclosed region are deleted. Pixels associated with the sides of upright cans can be distinguished from the pixels associated with toppled cans and deleted in this way because the sides of upright cans have a much greater gradient than the exposed sides of toppled cans. As shown in FIG. 3B, because the bounding box 36 for a toppled can is a non-square rectangle that is close in size and shape to the outer boundary 34, the outer boundary does not have to be found to detect a toppled can. Finding the bounding box 36 suffices. The shape, including the size, or area, of each bounding box is then determined and compared to a model shape. For example, the model shape for a toppled can in plan view is a rectangle of a certain ratio of length to width. If the shape of the bounding box is rectangular and not square and matches the toppled-can model shape (a rectangle with a predetermined length-to-width ratio), the shape is classified as a toppled can. If there is no match, the current process ends for that shape.
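One concrete way to realize the boundary and bounding-box step is with OpenCV contour extraction, which here stands in for whatever boundary finder the processing system actually uses; the minimum blob area is an assumed noise filter:

```python
# A sketch only: find contiguous white pixel groups and their bounding boxes.
import cv2

MIN_AREA = 150   # ignore tiny blobs (assumed value, not from the patent)

def candidate_boxes(binary):
    """Return (x, y, w, h) bounding boxes, one per candidate outer boundary."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= MIN_AREA]
```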
As shown in FIG. 3A, a circular outer boundary 30 of an upright can is circumscribed by a square bounding box 32. So, for upright cans, the square bounding box 32, instead of the circular outer boundary 30, can be used to detect upright cans, which saves computation time. If the bounding box is a square whose side length is within a predetermined range, the shape is classified as an upright can. In this example, where all the objects being conveyed are cans, using the bounding boxes to distinguish upright from toppled cans is sufficient. But when a variety of objects of different shapes and sizes is being conveyed, outer boundaries, rather than just bounding boxes, or complex bounding boxes, such as higher-order polygons, may have to be used to distinguish among the various objects. And more complex geometrical analysis, such as pattern matching or neural-net inferencing, may be required.
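A sketch of the bounding-box classification just described, assuming a single can size; the side-length range, model length-to-width ratio, and tolerances are illustrative values only:

```python
# A sketch only: classify each bounding box as upright, toppled, or no match.
UPRIGHT_SIDE_RANGE = (20, 30)   # expected side of the square box, in pixels (assumed)
TOPPLED_RATIO = 1.8             # expected length-to-width ratio of a toppled can (assumed)
RATIO_TOL = 0.3                 # allowed deviation from the model ratio (assumed)

def classify(box):
    x, y, w, h = box
    long_side, short_side = max(w, h), min(w, h)
    ratio = long_side / max(short_side, 1)
    if ratio < 1.15 and UPRIGHT_SIDE_RANGE[0] <= long_side <= UPRIGHT_SIDE_RANGE[1]:
        return "upright"    # square box of the expected size
    if abs(ratio - TOPPLED_RATIO) <= RATIO_TOL:
        return "toppled"    # non-square rectangle matching the toppled-can model
    return None             # no match; this shape is not counted
```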
Returning to the flowchart of FIG. 4 for detecting a toppled can, the centroid 26 is then computed along with the orientation of the bounding box, which is determined by the angle its major axis makes with the conveying direction. The centroid computed for the bounding box would be close to the centroid that would be computed for the outer boundary. The centroid 26 is tracked from frame to frame as the toppled can 19 advances in the conveying direction 20. The centroid 26 is associated with the corresponding centroid of a previous frame by using a least-mean-squares evaluation of all the centroids in each frame. A centroid 26 in the current frame is associated with the nearest centroid in the previous frame. By tracking the centroids from frame to frame, the processing system can determine the conveying direction. The processing system 14 continues to track the centroids 26 of the toppled cans 19 and check for centroids that pass downstream of an imaginary counting line 28 that extends across the width of the conveyor 16 at a predetermined position along the conveying path. Every time a centroid 26 first passes the counting line 28, the processing system 14 increments a can counter by one to indicate another toppled can 19. Once counted and past the counting line, that toppled can is ignored by the process, which resumes with another frame at the repetition rate.
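The tracking and counting logic could be sketched as follows, assuming the conveying direction runs along the image x axis and that only toppled-can boxes are passed in; the counting-line position and matching radius are assumed values:

```python
# A sketch only: nearest-centroid tracking and the counting-line increment.
COUNTING_LINE_X = 160   # pixel column of the imaginary counting line (assumed)
MAX_MATCH_DIST = 15.0   # largest frame-to-frame centroid jump to accept (assumed)

def track_and_count(prev_centroids, toppled_boxes, count):
    """Return this frame's centroids and the updated toppled-can count."""
    current = []
    for x, y, w, h in toppled_boxes:
        cx, cy = x + w / 2.0, y + h / 2.0
        # Associate with the nearest centroid from the previous frame, if any is close.
        nearest = min(prev_centroids,
                      key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2,
                      default=None)
        matched = (nearest is not None and
                   ((nearest[0] - cx) ** 2 + (nearest[1] - cy) ** 2) ** 0.5 <= MAX_MATCH_DIST)
        # Count the can the first time its tracked centroid crosses the counting line.
        if matched and nearest[0] < COUNTING_LINE_X <= cx:
            count += 1
        current.append((cx, cy))
    return current, count
```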
The example describes in detail ways of identifying, tracking, and counting toppled cans. But it is also possible to identify, track, and count upright, dome-up cans, which would have a circular outer boundary and darker gray-scale values that distinguish them from toppled cans. A higher-resolution image is necessary to detect the thin rims of untoppled, dome-down cans, but otherwise the process is the same. A different threshold could be used for upright cans, whose pixels have an average value different from the average pixel value of toppled cans. It would also be possible to include models of other shapes, such as those produced by two side-by-side or partly stacked toppled cans, to detect those other situations. And the process can be used on objects other than cans, such as bottles.

Claims

What is claimed is:
1. A method for identifying objects conveyed in a mass of objects on a conveyor, the method comprising: a) at a predetermined rate with a rangefinder, taking consecutive depth frames encompassing a target area on a conveyor conveying objects in a conveying direction; b) producing an array of pixels constituting a digital image of each depth frame; c) finding an outer boundary or a bounding box surrounding the outer boundary of a group of contiguous pixels in each digital image whose values lie on the same side of a threshold value; d) determining the shape of the outer boundary or the bounding box; e) comparing the shape of the outer boundary or the bounding box to:
1) a predetermined toppled object model shape; or
2) a predetermined upright object model shape; or
3) both; f) identifying a toppled object when the shape of the outer boundary or the bounding box matches the shape of the predetermined toppled object model shape or an upright object when the shape of the outer boundary or the bounding box matches the shape of the predetermined upright object model shape.
2. The method of claim 1 comprising: g) if the shape of the outer boundary or the bounding box matches the shape of the predetermined toppled object model shape:
1) computing the centroid of the pixels bounded by the outer boundary or the bounding box;
2) tracking the centroid from depth frame to depth frame;
3) incrementing a toppled object counter when the centroid being tracked first appears in a digital image downstream in the conveying direction of a predetermined position of a counting line extending across the target area; or h) if the shape of the outer boundary or the bounding box matches the shape of the predetermined upright object model shape: 1) computing the centroid of the pixels bounded by the outer boundary or the bounding box;
2) tracking the centroid from depth frame to depth frame;
3) incrementing an upright object counter when the centroid being tracked first appears in a digital image downstream in the conveying direction of a predetermined position of a counting line extending across the target area.
3. The method of claim 1 comprising determining the conveying direction relative to the orientation of the rangefinder.
4. The method of claim 1 wherein the pixel values represent the distance of the rangefinder from the nearest object in the target area.
5. The method of claim 1 comprising cropping the depth frames or the digital images to eliminate the conveyor frame from the digital images.
6. The method of claim 1 comprising blurring the digital images with a Gaussian filter.
7. The method of claim 1 wherein the objects are cans and the predetermined toppled object model shape is a non-square rectangle.
8. The method of claim 1 wherein the objects are cans and the predetermined upright object model shape is a square.
9. The method of claim 1 wherein determining the shape of the outer boundary or the bounding box includes finding its area.
10. The method of claim 1 comprising, besides determining the shape of the outer boundary or the bounding box, determining its major axis and the angle between the major axis and the conveying direction.
11. The method of claim 1 comprising tracking each centroid from frame to frame by associating each centroid in the previous frame to the nearest centroid in the current frame.
12. The method of claim 1 wherein upright objects have a different average pixel value than toppled objects.
13. The method of claim 1 comprising identifying partially toppled objects from outer boundaries or bounding boxes whose shapes do not match the upright object model shape or the toppled object model shape.
14. The method of claim 1 comprising determining the outer boundaries or the bounding boxes by first converting the digital image into a binary digital image in which pixels of the digital image having a value above the threshold are assigned a maximum value in the binary digital image and pixels of the digital image having a value below the threshold are assigned a minimum value in the binary digital image, or vice versa.
15. A system for identifying objects on a conveyor, the system comprising: a conveyor conveying objects in a conveying direction; a rangefinder disposed above the conveyor scanning a field of view encompassing a target area on the conveyor at a predetermined rate to produce depth frames indicating the distance from the rangefinder to objects in the field of view; a processing system executing program instructions to: a) produce an array of pixels constituting a digital image of each depth frame; b) find an outer boundary or a bounding box surrounding the outer boundary of a group of contiguous pixels in each digital image whose values lie on the same side of a threshold value; c) determine the shape of the outer boundary or the bounding box; d) compare the shape of the outer boundary or the bounding box to:
1) a predetermined toppled object model shape; or
2) a predetermined upright object model shape; or
3) both; e) identify a toppled object when the shape of the outer boundary or the bounding box matches the shape of the predetermined toppled object model shape or an upright object when the shape of the outer boundary or the bounding box matches the shape of the predetermined upright object model shape.
16. The system of claim 15 wherein the processing system executes program instructions that: f) if the shape of the outer boundary or the bounding box matches the shape of the predetermined toppled object model shape:
1) compute the centroid of the pixels bounded by the outer boundary or the bounding box;
2) track the centroid from depth frame to depth frame; 3) increment a toppled object counter when the centroid being tracked first appears in a digital image downstream in the conveying direction of a predetermined position of a counting line extending across the target area; or g) if the shape of the outer boundary or the bounding box matches the shape of the predetermined upright object model shape:
1) compute the centroid of the pixels bounded by the outer boundary or the bounding box;
2) track the centroid from depth frame to depth frame;
3) increment an upright object counter when the centroid being tracked first appears in a digital image downstream in the conveying direction of a predetermined position of a counting line extending across the target area.
17. The system of claim 15 wherein a processor in the rangefinder or the processing system executes program instructions to crop the depth frames or the digital images to eliminate the conveyor frame from the digital images.
18. The system of claim 15 wherein the processing system executes program instructions to blur the digital images with a Gaussian filter.
19. The system of claim 15 wherein the objects are cans and the predetermined toppled object model shape is a non-square rectangle.
20. The system of claim 15 wherein the objects are cans and the predetermined upright object model shape is a square.
21. The system of claim 15 wherein the processing system, in determining the shape of the outer boundary or the bounding box, executes program instructions to find the area of the outer boundary.
22. The system of claim 15 wherein the processing system, besides determining the shape of the outer boundary or the bounding box, executes program instructions to determine the major axis of the outer boundary or the bounding box and the angle between the major axis and the conveying direction.
23. The system of claim 15 wherein the processing system executes program instructions to track each centroid from frame to frame by associating each centroid in the previous frame to the nearest centroid in the current frame.
24. The system of claim 15 wherein the processing system executes program instructions to identify partially toppled objects from outer boundaries or the bounding boxes whose shapes do not match the upright object model shape or the toppled object model shape.
25. The system of claim 15 wherein the processing system executes program instructions to determine the outer boundaries or the bounding boxes by first converting the digital image into a binary digital image by assigning pixels of the digital image having a value above the threshold a maximum value in the binary digital image and assigning pixels of the digital image having a value below the threshold a minimum value in the binary digital image, or vice versa.
26. The system of claim 15 wherein the processing system is external to the rangefinder.
27. The system of claim 15 wherein the rangefinder is a Lidar camera.
PCT/US2022/035409 2021-07-29 2022-06-29 Conveyed-object identification system WO2023009270A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163227016P 2021-07-29 2021-07-29
US63/227,016 2021-07-29

Publications (1)

Publication Number Publication Date
WO2023009270A1 true WO2023009270A1 (en) 2023-02-02

Family

ID=82701880

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/035409 WO2023009270A1 (en) 2021-07-29 2022-06-29 Conveyed-object identification system

Country Status (1)

Country Link
WO (1) WO2023009270A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013033442A1 (en) * 2011-08-30 2013-03-07 Digimarc Corporation Methods and arrangements for identifying objects
US20190213389A1 (en) * 2018-01-05 2019-07-11 Aquifi, Inc. Systems and methods for volumetric sizing

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ALESSANDRO BEVILACQUA ET AL: "People Tracking Using a Time-of-Flight Depth Sensor", 2006 IEEE INTERNATIONAL CONFERENCE ON VIDEO AND SIGNAL BASED SURVEILLANCE, 1 November 2006 (2006-11-01), pages 89 - 89, XP055204367, ISBN: 978-0-76-952688-1, DOI: 10.1109/AVSS.2006.92 *
KUMAR AJAY ET AL: "Journal of Physics: Conference Series PAPER @BULLET OPEN ACCESS Overview of Municipal Solid Waste Generation and Energy Utilization Potential in Major Cities of Indonesia Mapping of Municipal Solid Waste Transportation System, Case Studies Seberang Ulu Region Palembang City Development of a method o", JOURNAL OF PHYSICS CONFERENCE SERIES, 22 September 2019 (2019-09-22), pages 12127, XP055964075, Retrieved from the Internet <URL:https://iopscience.iop.org/article/10.1088/1742-6596/1359/1/012127/pdf> [retrieved on 20220923] *
OMAR ARIF ET AL: "Tracking and Classifying Objects on a Conveyor Belt Using Time-of-Flight Camera", 35TH INTERNATIONAL SYMPOSIUM ON AUTOMATION AND ROBOTICS IN CONSTRUCTION (ISARC 2018), 20 June 2010 (2010-06-20), XP055729055, ISSN: 2413-5844, DOI: 10.22260/ISARC2010/0022 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22747506

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE