WO2024159126A1 - High performance machine vision system - Google Patents

High performance machine vision system

Info

Publication number
WO2024159126A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
imager
machine vision
vision system
primary
Application number
PCT/US2024/013158
Other languages
French (fr)
Other versions
WO2024159126A8 (en)
Inventor
Caitlin WURZ
Deepak SURANA
Gyula Mate MOLNAR
Divya THANIGAI ARASU
Michael Corbett
Nicolas TUTUIANU
Andreas Savvides
Original Assignee
Cognex Corporation
Application filed by Cognex Corporation filed Critical Cognex Corporation
Publication of WO2024159126A1 publication Critical patent/WO2024159126A1/en
Publication of WO2024159126A8 publication Critical patent/WO2024159126A8/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00: Methods or arrangements for sensing record carriers, e.g. for reading patterns

Definitions

  • the techniques described herein relate generally to imaging systems, including machine vision systems that are configured to acquire and analyze images of objects or symbols (e.g., barcodes).
  • Machine vision systems are generally configured for use in capturing images of objects or symbols and analyzing the images to identify the objects or decode the symbols. Accordingly, machine vision systems generally include one or more devices for image acquisition and image processing. In conventional applications, these devices can be used to acquire images, or to analyze acquired images, such as for the purpose of decoding imaged symbols such as barcodes or text. In some contexts, machine vision and other imaging systems can be used to acquire images of objects that may be larger than a field of view (FOV) for a corresponding imaging device and/or that may be moving relative to an imaging device.
  • aspects of the present disclosure relate to a high performance machine vision system.
  • Some embodiments relate to a method for analyzing image data captured by a machine vision system comprising a primary imager and a secondary imager.
  • the method may include receiving, from the primary imager, first image data and metadata generated by the primary imager, the metadata based at least in part on information provided by the secondary imager to the primary imager; receiving, from the secondary imager, second image data, wherein the second image data comprises a captured image as captured by the secondary imager; and generating correlated system information by correlating the first image data and the second image data based on the metadata.
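
As an illustrative sketch only (not part of this publication), the Python snippet below shows one way the correlation step could look; the record fields and the assumption that the primary imager's metadata carries a per-trigger identifier are hypothetical.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ImageRecord:
    trigger_id: int   # hypothetical: identifier of the trigger event
    imager_id: str    # which imager captured the image
    image: bytes      # encoded image payload

def correlate(first: List[ImageRecord],
              second: List[ImageRecord],
              metadata: Dict[int, dict]) -> Dict[int, dict]:
    """Group primary and secondary image data by the trigger id
    carried in the primary imager's metadata."""
    correlated = {}
    for trigger_id, meta in metadata.items():
        correlated[trigger_id] = {
            "metadata": meta,
            "primary_images": [r for r in first if r.trigger_id == trigger_id],
            "secondary_images": [r for r in second if r.trigger_id == trigger_id],
        }
    return correlated
```
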
  • the method further comprises generating an image based on the correlated system information.
  • the method further comprises displaying the image via a graphical user interface (GUI).
  • the method further comprises generating a new image in response to receiving new first image data from the primary imager.
  • the method further comprises, prior to receiving the second image data: decoding a symbol within the captured image; and generating the second image data based on a decode result.
  • decoding the symbol and generating the second image data is performed by the secondary imager.
  • the second image data further comprises a quality indication corresponding to the captured image, the quality indication determined by the primary imager based on the decode result.
  • receiving the first image data and the metadata comprises receiving the first image data and the metadata over a first wired connection with the primary imager; and receiving the second image data comprises receiving the second image data over a second wired connection with the secondary imager.
  • the correlated system information comprises status information of the machine vision system comprising at least one of: an indication of whether the primary imager is connected, an indication of whether the secondary imager is connected, a count of machine vision system triggers, a count of multi reads, or a count of objects.
  • the status information of the machine vision system comprises status information for individual imagers of the primary imager and the secondary imager.
  • the method further comprises generating a graphical analysis of a metric based on the correlated system information for the primary imager and the secondary imager, respectively.
  • the graphical analysis comprises a chart visually depicting the metric.
  • the metric comprises a symbol decode rate by the secondary imager.
  • the method further comprises filtering the correlated system information according to a metric; and generating a representation of the filtered correlated system information.
  • the method further comprises receiving, from a dimensioner associated with the machine vision system, an object dimension.
  • the method further comprises receiving, from a scale associated with the machine vision system, an object weight.
  • the metric comprises at least one of: dimensioner results, scale results, or sorter information.
  • the metric comprises at least one of: machine vision system trigger information or decode results.
  • Some embodiments relate to a non-transitory computer-readable medium comprising instructions which, when executed, cause at least one processor to carry out the method described herein.
  • the machine vision system may include a primary imager configured to generate first image data and metadata; a secondary imager in communication with the primary imager and configured to generate second image data, wherein the second image data comprises a captured image as captured by the secondary imager; and at least one processor in communication with the primary imager and the secondary imager and configured to: receive the first image data and the metadata; receive the second image data; and generate correlated system information by correlating the first image data and the second image data based on the metadata, wherein the metadata is generated by the primary imager based on information provided by the secondary imager.
  • the at least one processor is further configured to: generate an image based on the correlated system information; and transmit the image to a graphical user interface (GUI).
  • the primary imager is configured to generate the first image data in response to receiving a trigger signal.
  • the secondary imager is configured to generate the second image data in response to receiving the trigger signal from the primary imager.
  • the second image data further comprises a decode result corresponding to a symbol within the captured image.
  • the at least one processor communicates with the primary imager via a first connection, and the at least one processor communicates with the secondary imager via a second connection.
  • the at least one processor is further configured to transmit status information for each of the primary imager and the secondary imager.
  • the machine vision system further comprises a plurality of secondary imagers in communication with the primary imager and the at least one processor, the second image data generated by the plurality of secondary imagers.
  • Some embodiments relate to a computerized method for analyzing runtime data captured by a machine vision system comprising a primary imager and one or more secondary imagers.
  • the method may include receiving, from the primary imager, first image data captured by the primary imager and metadata generated by the primary imager based on information provided by the one or more secondary imagers to the primary imager; receiving, from the one or more secondary imagers, second image data captured by the one or more secondary imagers; correlating the first image data and the second image data based on the metadata generated by the primary imager to generate correlated system information; and generating images based on the correlated system information for the primary imager and the one or more secondary imagers, respectively, for display.
  • Some embodiments relate to a method for analyzing runtime data captured by a machine vision system comprising a primary imager and one or more secondary imagers.
  • the method may include receiving, from the primary imager, first image data captured by the primary imager and metadata generated by the primary imager based on information provided by the one or more secondary imagers to the primary imager; receiving, from the one or more secondary imagers, second image data captured by the one or more secondary imagers; correlating the first image data and the second image data based on the metadata generated by the primary imager to generate correlated system information; and generating one or more graphical analyses of one or more metrics based on the correlated system information for the primary imager and the one or more secondary imagers, respectively.
  • Some embodiments relate to a computerized method for analyzing runtime data captured by a machine vision system comprising a primary imager and one or more secondary imagers.
  • the method may include receiving, from the primary imager, first image data captured by the primary imager and metadata generated by the primary imager based on information provided by the one or more secondary imagers to the primary imager; receiving, from the one or more secondary imagers, second image data captured by the one or more secondary imagers; correlating the first image data and the second image data based on the metadata generated by the primary imager to generate correlated system information; filtering the correlated system information according to one or more metrics; and generating a representation of the filtered correlated system information.
  • FIG. 1A is a schematic diagram illustrating an exemplary system configured to capture multiple images of sides of an object, according to some embodiments.
  • FIG. 1B is another schematic diagram of the system of FIG. 1A with the addition of a dimensioner and a motion measurement device, according to some embodiments.
  • FIG. 2 is a schematic diagram illustrating another exemplary system configured to capture multiple images of sides of an object, according to some embodiments.
  • FIG. 3 is a schematic diagram illustrating a third exemplary system configured to capture multiple images of sides of an object, according to some embodiments.
  • FIG. 4 is a schematic diagram illustrating a high performance vision system, according to some embodiments.
  • FIG. 5A is a flow chart illustrating a method for analyzing image data captured by a machine vision system, according to some embodiments.
  • FIG. 5B is a flow chart illustrating a method for providing a live dashboard, according to some embodiments.
  • FIG. 6A is an exemplary user interface showing live captured images, according to some embodiments.
  • FIG. 6B is another exemplary user interface showing live captured images, according to some embodiments.
  • FIG. 6C is the exemplary user interface of FIG. 6B when Tunnel Connection Status is selected, according to some embodiments.
  • FIG. 7 is an exemplary user interface showing live stitched images, according to some embodiments.
  • FIG. 8 is a flow chart illustrating a method for providing a performance dashboard, according to some embodiments.
  • FIG. 9 is a schematic diagram illustrating chart types available through a performance dashboard, according to some embodiments.
  • FIG. 10 is an exemplary timeseries chart showing read rate of a device over time, according to some embodiments.
  • FIG. 11 is an exemplary numerical distribution showing dimensions distribution captured by a device, according to some embodiments.
  • FIG. 12A is an exemplary timeseries chart showing read rate of another device over time with one (1) hour intervals, according to some embodiments.
  • FIG. 12B is an exemplary timeseries chart showing the read rate of FIG. 12A over time with fifteen (15) minute intervals, according to some embodiments.
  • FIG. 13 is an exemplary timeseries chart showing read rates of multiple devices over time, according to some embodiments.
  • FIG. 14A is an exemplary user interface for creating new events, according to some embodiments.
  • FIG. 14B is an exemplary user interface showing created events, according to some embodiments.
  • FIG. 14C is another exemplary user interface for creating new events, according to some embodiments.
  • FIG. 14D is another exemplary user interface showing created events, according to some embodiments.
  • FIG. 15 is an exemplary user interface for viewing, filtering, and sorting notifications, according to some embodiments.
  • FIG. 16 is a flow chart illustrating a method for providing a result browser, according to some embodiments.
  • FIG. 17 is an exemplary user interface showing trigger data in a table, according to some embodiments.
  • FIG. 18A is an exemplary user interface for searching trigger data, according to some embodiments.
  • FIG. 18B is an exemplary user interface for searching the trigger data by decode results, according to some embodiments.
  • FIG. 18C is an exemplary user interface for searching the trigger data by dimensioner, according to some embodiments.
  • FIG. 18D is an exemplary user interface showing the result of the search of FIG. 18C, according to some embodiments.
  • FIG. 18E is an exemplary user interface for searching the trigger data by scale result, according to some embodiments.
  • FIG. 18F is an exemplary user interface for searching the trigger data by sorter information, according to some embodiments.
  • FIG. 19A is another exemplary user interface for searching trigger data, according to some embodiments.
  • FIG. 19B is an exemplary user interface for searching the trigger data by trigger information, according to some embodiments.
  • FIG. 19C is an exemplary user interface for downloading the search results, according to some embodiments.
  • FIG. 20 is an exemplary user interface for editing result tags, according to some embodiments.
  • FIG. 21A is an exemplary user interface showing trigger data in a table, according to some embodiments.
  • FIG. 21B is an exemplary user interface for reconfiguring the table showing the trigger data, according to some embodiments.
  • FIG. 21C is an exemplary user interface showing the result of a reconfigured table, according to some embodiments.
  • FIG. 22 is an exemplary user interface for reviewing trigger data, according to some embodiments.
  • Machine vision systems can be used to perform various tasks or processes, such as inspection processes, manufacturing processes, warehouse processes, and/or other processes that leverage machine vision.
  • a machine vision system may include several devices that are used to perform the machine vision task.
  • the devices can include, for example, one or more imaging devices configured to acquire image data and/or one or more measuring devices (e.g., integrated device sensors) configured to measure objects within a field of view (FOV) of the machine vision system.
  • Each device of a machine vision system can capture its own associated data, such as image data and/or other sensor data. Over time, such data can result in a large amount of data for the machine vision system.
  • the inventors have recognized and appreciated that interpreting massive runtime data of a machine vision system, especially in real time, can be quite challenging, and may not be possible depending on the constraints of the machine vision system. For example, real time data interpretation may require transmitting and/or processing image data in a very short time period for the data to be relevant or useful for analyzing the machine vision system. Such interpretation may not be possible depending on latencies or other constraints of the machine vision system. Further, even if such interpretation is possible, too large of a delay in interpreting the runtime data can cause missed opportunities in addressing problems that might be occurring in the machine vision system (e.g., such as a no read result and/or errors in readings).
  • a machine vision system may include a live dashboard, a performance dashboard, and a result browser.
  • the live dashboard can provide, for viewing, a live stream of images and associated machine vision data (e.g., triggers, results, etc.).
  • the live dashboard can enable real-time analysis of the machine vision system, such as to determine whether the machine vision system including, for example, tunnels, is being triggered properly, whether packages are moving through the system correctly, etc.
  • the performance dashboard can provide, for viewing, graphical analyses of various data associated with the machine vision system.
  • the performance dashboard can enable real-time analysis of the machine vision system, such as to determine whether the system is functioning properly, etc.
  • the result browser may provide presentations by various metrics such as individual triggers.
  • the result browser can enable customized analyses of the machine vision system by system data such as triggers, which may expose hidden problems in the system.
  • a machine vision system may be implemented in a tunnel arrangement (or system), which may include a conveyor and a structure holding the devices such that each device may be positioned at an angle relative to the conveyor resulting in an angled FOV.
  • the FOVs of one or more imaging devices may overlap.
  • the imaging devices may be configured to acquire image data of a shared scene such as objects disposed on the conveyor and moving into the FOVs of the imaging devices by the conveyor.
  • the measuring devices may be configured to acquire measurement data of these objects, which may complement the image data acquired by the imaging devices in characterizing various aspects of the performance of the machine vision system.
  • Each imaging device may be configured to include multiple imagers, each of which may be configured to capture image data of the objects in the shared scene.
  • One of the multiple imagers may be configured as a primary imager; and the rest of the multiple imagers may be configured as secondary imagers.
  • the secondary imagers may be configured to provide information to the primary imager such that the primary imager may generate metadata based on the information provided by the secondary imagers.
  • the metadata generated by the primary imager may indicate a relationship between image data and respective imagers.
  • the measuring devices may include a dimensioner and/or a motion measurement device.
  • the dimensioner may be configured to measure dimensions (e.g., height, length, and width) of the objects in the shared scene.
  • the motion measurement device may be configured to track the physical movements of the objects in the shared scene, based on which rates of capture by the imaging devices may be derived.
  • the imaging devices and measuring devices may acquire respective image data and measurement data corresponding to trigger signals.
  • the trigger signals may be initiated by the imaging devices and measuring devices, an image processing device, or any suitable processors/servers/computing devices integrated with the imaging devices and measuring devices or remote from the imaging devices and measuring devices.
  • the image processing device may be configured to provide trigger signals to cause the imaging devices to acquire image data and/or the measuring devices to acquire measurement data.
  • Each trigger signal may be referred to as a trigger.
  • the image processing device may be configured to receive the image data from the imaging devices, the metadata from the primary imagers of the imaging devices, and the measurement data acquired by the measuring devices.
  • the image processing device may be configured to decode symbols from the received image data.
  • the image processing device may be configured to generate composite image data by, for example, stitching image data acquired by multiple imagers.
  • the image processing device may be configured to generate correlated system information, based at least in part on the metadata generated by the primary imagers of the imaging devices, by correlating the received image data to respective imagers of respective imaging devices, correlating the composite image data to sides of the captured objects in the shared scene, and/or correlating the received measurement data to the received image data and/or the composite image data.
  • the techniques use system data, such as the correlated system information, to provide flexible runtime data through a live dashboard.
  • the techniques can provide live captured images and/or live stitched images.
  • the live data can be data provided within a threshold time period (e.g., whereas otherwise data provided after the threshold time period can be considered historical data).
  • the techniques described herein may provide for generating images for display within a threshold time period from when the image data is acquired by the imagers of the imaging devices, respectively.
  • the threshold time period may be, for example, five to twenty seconds when the images are generated based on composite image data from a plurality of image devices, and two to ten seconds (e.g., five seconds) when the images are generated based on received image data from individual imaging devices.
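
For illustration, the following sketch (not from the publication; the field names are assumptions and the thresholds are taken from the ranges mentioned above) tags a displayed image as live or historical based on such a threshold time period.

```python
from datetime import datetime, timedelta

# Assumed thresholds drawn from the ranges described above.
STITCHED_THRESHOLD = timedelta(seconds=20)  # composite images from multiple devices
SINGLE_THRESHOLD = timedelta(seconds=5)     # images from an individual imaging device

def is_live(acquired_at: datetime, displayed_at: datetime, stitched: bool) -> bool:
    """Return True if the image is displayed within the live threshold of its
    acquisition time; otherwise it is treated as historical data."""
    threshold = STITCHED_THRESHOLD if stitched else SINGLE_THRESHOLD
    return (displayed_at - acquired_at) <= threshold
```
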
  • the techniques described herein can continuously provide live data from the machine vision system, and can therefore provide live result streams. Such live data may enable a reviewer of the live data to monitor and/or analyze the machine vision system. In some embodiments, a reviewer can pause the live data so as to investigate areas of interest of the machine vision system, such as data associated with a particular trigger.
  • the techniques allow users to analyze the system data, such as to perform customizable data analysis (e.g., including graphical charts and statistics), data searching and/or data filtering through a performance dashboard.
  • the system data may be aggregated, based at least in part on the correlated system information, into one or more performance metrics, which may provide a holistic view of the performance of the system.
  • the one or more performance metrics may include, for example, one or more metrics for individual devices in the machine vision system, one or more metrics for individual triggers, and/or the like.
  • the techniques described herein can provide for generating graphical analyses of the one or more metrics for individual devices in the machine vision system.
  • the one or more metrics may include a rate of capture by a respective device, dimensions of objects captured by a respective device, scales of objects captured by a respective device, decode time by a respective device, etc.
  • the graphical analyses may include one or more charts.
  • the charts may show one or more of the metrics of respective devices over time.
  • a step chart may show rates of capture or good read count by respective imagers in an imaging device.
  • a histogram chart may show dimensions of objects captured by a respective imager.
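
Purely as an illustration of the chart types mentioned above, the sketch below plots a step chart of per-imager read counts and a histogram of object dimensions with matplotlib; the sample data is invented.

```python
import matplotlib.pyplot as plt

# Invented sample data: per-interval good-read counts for two imagers,
# and measured object lengths (mm) from a dimensioner.
intervals = [0, 15, 30, 45, 60]  # minutes
reads_imager_1 = [120, 118, 95, 110, 122]
reads_imager_2 = [115, 117, 119, 116, 118]
object_lengths_mm = [310, 295, 400, 405, 310, 500, 298, 305, 412, 300]

fig, (ax_step, ax_hist) = plt.subplots(1, 2, figsize=(10, 4))

# Step chart of good-read counts per imager over time.
ax_step.step(intervals, reads_imager_1, where="post", label="imager 1")
ax_step.step(intervals, reads_imager_2, where="post", label="imager 2")
ax_step.set_xlabel("time (min)")
ax_step.set_ylabel("good reads")
ax_step.legend()

# Histogram of object dimensions captured by a respective imager.
ax_hist.hist(object_lengths_mm, bins=5)
ax_hist.set_xlabel("object length (mm)")
ax_hist.set_ylabel("count")

plt.tight_layout()
plt.show()
```
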
  • the one or more metrics for individual devices in the machine vision system may be customized according to requests. For example, minimum values, maximum values, and/or mean values of the one or more metrics for individual devices in the machine vision system may be provided.
  • the techniques described herein may provide a result browser that enables image data from various imaging devices and measurement data from various measuring devices to be organized and searchable based on performance metrics.
  • the techniques described herein may enable the system data to be searched and filtered by one or more metrics for individual triggers.
  • the one or more metrics for individual triggers may include, for example, trigger information, decode results, dimensioner results, scale results, and/or sorter information.
  • a representation of the filtered system data may be generated, which may facilitate analyses of the performance of system (e.g., based on triggers).
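
A minimal sketch of this kind of per-trigger filtering is shown below; it is not the publication's implementation, and the record fields are hypothetical.

```python
from typing import Callable, Dict, List

# Hypothetical per-trigger record carrying trigger info, decode result,
# dimensioner, scale, and sorter fields, mirroring the metrics listed above.
TriggerRecord = Dict[str, object]

def filter_triggers(records: List[TriggerRecord],
                    predicate: Callable[[TriggerRecord], bool]) -> List[TriggerRecord]:
    """Keep only the trigger records matching a caller-supplied metric filter."""
    return [r for r in records if predicate(r)]

# Example: narrow the data to triggers with a "no read" decode result.
# no_reads = filter_triggers(records, lambda r: r.get("decode_result") == "no_read")
```
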
  • the techniques can allow the system data to be narrowed to a period of interest related to the decreased read rate, which can allow the user to view images from that time period to check for and/or diagnose a problem with the tunnel that may be causing the decreased read rate.
  • the techniques can allow system data to be broken into sub-results (e.g., data for symbol readers, data for dimensioners, etc.).
  • FIG. 1A shows an example of a system 100 for capturing multiple images of each side of an object in accordance with an embodiment of the technology.
  • system 100 can be configured to evaluate symbols (e.g., barcodes, two-dimensional (2D) codes, fiducials, hazmat, machine readable code, etc.) on objects (e.g., objects 118a, 118b) moving through a tunnel 102, such as a symbol 120 on object 118a, including assigning symbols to objects (e.g., objects 118a, 118b).
  • symbol 120 is a flat 2D barcode on a top surface of object 118a, and objects 118a and 118b are roughly cuboid boxes.
  • any suitable geometries are possible for an object to be imaged, and any variety of symbols and symbol locations can be imaged and evaluated, including non-direct part mark (DPM) symbols and DPM symbols located on a top or any other side of an object.
  • objects 118a and 118b are disposed on a conveyor 116 that is configured to move objects 118a and 118b in a horizontal direction through tunnel 102 at a relatively predictable and continuous rate, or at a variable rate measured by a device, such as an encoder or other motion measurement device. Additionally or alternatively, objects can be moved through tunnel 102 in other ways (e.g., with non-linear movement).
  • conveyor 116 can include a conveyor belt.
  • conveyor 116 can be implemented using other types of transport systems.
  • system 100 can include imaging devices 112 and an image processing device 132.
  • system 100 can include multiple imaging devices in a tunnel arrangement (e.g., implementing a portion of tunnel 102), representatively shown via imaging devices 112a, 112b, and 112c, each with a field-of-view (“FOV"), representatively shown via FOV 114a, 114b, 114c, that includes part of the conveyor 116.
  • each imaging device 112 can be positioned at an angle relative to the conveyor top or side (e.g., at an angle relative to a normal direction of symbols on the sides of the objects 118a and 118b or relative to the direction of travel), resulting in an angled FOV.
  • system 100 can be configured to capture one or more images of multiple sides of objects 118a and/or 118b as the objects are moved by conveyor 116.
  • the captured images can be used to identify symbols on each object (e.g., a symbol 120) and/or assign symbols to each object, which can be subsequently decoded (as appropriate).
  • a gap in conveyor 116 (not shown) can facilitate imaging of a bottom side of an object (e.g., as described in U.S. Patent Application Publication No.
  • each array can include four or more imaging devices.
  • an array of imaging devices 112 may be referred to as an imaging device, and imaging devices 112 in the array may be referred to as an imager.
  • although imaging devices 112 are generally shown imaging objects 118a and 118b without mirrors to redirect a FOV, this is merely an example, and one or more fixed and/or steerable mirrors can be used to redirect a FOV of one or more of the imaging devices as described below with respect to FIGs. 2 and 3, which may facilitate a reduced vertical or lateral distance between imaging devices and objects in tunnel 102.
  • imaging device 112a can be disposed with an optical axis parallel to conveyor 116, and one or more mirrors can be disposed above tunnel 102 to redirect a FOV from imaging device 112a toward a front and top of objects in tunnel 102.
  • imaging devices 112 can be implemented using any suitable type of imaging device(s).
  • imaging devices 112 can be implemented using 2D imaging devices (e.g., 2D cameras), such as area scan cameras and/or line scan cameras.
  • imaging device 112 can be an integrated system that includes a lens assembly and an imager, such as a CCD or CMOS sensor.
  • imaging devices 112 may each include one or more image sensors, at least one lens arrangement, and at least one control device (e.g., a processor device) configured to execute computational operations relative to the image sensor.
  • Each of the imaging devices 112a, 112b, or 112c can selectively acquire image data from different fields of view (FOVs).
  • system 100 can be utilized to acquire multiple images of each side of an object where one or more images may include more than one object.
  • Object 118 may be associated with one or more symbols, such as a barcode, a QR code, etc.
  • system 100 can be configured to facilitate imaging of the bottom side of an object supported by conveyor 116 (e.g., the side of object 118a resting on conveyor 116).
  • conveyor 116 may be implemented with a gap (not shown).
  • gaps between objects can range in size.
  • gaps between objects can be substantially the same between all sets of objects in a system, or can exhibit a fixed minimum size for all sets of objects in a system. In some embodiments, smaller gap sizes may be used to maximize system throughput.
  • system 100 can include a dimensioning system (not shown), sometimes referred to herein as a dimensioner, that can measure dimensions of objects moving toward tunnel 102 on conveyor 116, and such dimensions can be used (e.g., by image processing device 132) in a process to assign a symbol to an object in an image captured as one or more objects move through tunnel 102.
  • system 100 can include devices (e.g., an encoder or other motion measurement device, not shown) to track the physical movement of objects (e.g., objects 118a, 118b) moving through the tunnel 102 on the conveyor 116.
  • FIG. 1B shows an example of a system for capturing multiple images of each side of an object in accordance with an embodiment of the technology.
  • FIG. 1B shows a simplified diagram of a system 140 to illustrate an example arrangement of a dimensioner and a motion measurement device (e.g., an encoder) with respect to a tunnel.
  • the system 140 may include a dimensioner 150 and a motion measurement device 152.
  • a conveyor 116 is configured to move objects 118d, 118e along the direction indicated by arrow 154 past a dimensioner 150 before the objects 118d, 118e are imaged by one or more imaging devices 112.
  • a gap 156 is provided between objects 118d and 118e, and an image processing device 132 may be in communication with imaging devices 112, dimensioner 150, and motion measurement device 152.
  • Dimensioner 150 can be configured to determine dimensions and/or a location of an object supported by support structure 116 (e.g., object 118d or 118e) at a certain point in time. For example, dimensioner 150 can be configured to determine a distance from dimensioner 150 to a top surface of the object, and can be configured to determine a size and/or orientation of a surface facing dimensioner 150. In some embodiments, dimensioner 150 can be implemented using various technologies. For example, dimensioner 150 can be implemented using a 3D camera (e.g., a structured light 3D camera, a continuous time of flight 3D camera, etc.). As another example, dimensioner 150 can be implemented using a laser scanning system (e.g., a LiDAR system).
  • dimensioner 150 can be implemented using a 3D-A1000 system available from Cognex Corporation.
  • the dimensioning system or dimensioner may be implemented in a single device or enclosure with an imaging device (e.g., a 2D camera) and, in some embodiments, a processor (e.g., that may be utilized as the image processing device) may also be implemented in the device with the dimensioner and imaging device.
  • dimensioner 150 can determine 3D coordinates of each corner of the object in a coordinate space defined with reference to one or more portions of system 140. For example, dimensioner 150 can determine 3D coordinates of each of eight corners of an object that is at least roughly cuboid in shape within a Cartesian coordinate space defined with an origin at dimensioner 150. As another example, dimensioner 150 can determine 3D coordinates of each of eight corners of an object that is at least roughly cuboid in shape within a Cartesian coordinate space defined with respect to conveyor 116 (e.g., with an origin that originates at a center of conveyor 116).
  • a motion measurement device 152 may be linked to the conveyor 116 and imaging devices 112 to provide electronic signals to the imaging devices 112 and/or image processing device 132 that indicate the amount of travel of the conveyor 116, and the objects 118d, 118e supported thereon, over a known amount of time. This may be useful, for example, in order to coordinate capture of images of particular objects (e.g., objects 118d, 118e), based on calculated locations of the object relative to a field of view of a relevant imaging device (e.g., imaging device(s) 112).
  • motion measurement device 152 may be configured to generate a pulse count that can be used to identify the position of conveyor 116 along the direction of arrow 154.
  • motion measurement device 152 may provide the pulse count to image processing device 132 for identifying and tracking the positions of objects (e.g., objects 118d, 118e) on conveyor 116.
  • the motion measurement device 152 can increment a pulse count each time conveyor 116 moves a predetermined distance (encoder pulse count distance) in the direction of arrow 154.
  • an object's position can be determined based on an initial position, the change in the pulse count, and the pulse count distance.
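
As a worked illustration of this position calculation (not from the publication; units and values are assumed), an object's position can be computed as the initial position plus the pulse-count change times the pulse distance:

```python
def object_position(initial_position_mm: float,
                    initial_pulse_count: int,
                    current_pulse_count: int,
                    pulse_distance_mm: float) -> float:
    """Estimate an object's position along the conveyor from the change in
    encoder pulse count and the distance represented by each pulse."""
    return initial_position_mm + (current_pulse_count - initial_pulse_count) * pulse_distance_mm

# Example (assumed numbers): an object first seen at 0 mm, with a 2 mm pulse
# distance, is estimated at 500 mm after 250 additional pulses.
# object_position(0.0, 1000, 1250, 2.0) -> 500.0
```
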
  • image processing device 132 can coordinate operations of various components of system 100.
  • image processing device 132 can cause a dimensioner (e.g., dimensioner 150 shown in FIG. IB) to acquire dimensions of an object positioned on conveyor 116 and can cause imaging devices 112 to capture images of each side.
  • image processing device 132 can control detailed operations of each imaging device, for example, by providing trigger signals to cause the imaging device to capture images at particular times, etc.
  • additionally or alternatively, another device (e.g., a processor included in each imaging device, a separate controller device, etc.) can control detailed operations of each imaging device.
  • image processing device 132 can provide a trigger signal to each imaging device and/or dimensioner (e.g., dimensioner 150 shown in FIG. IB), and a processor of each imaging device can be configured to implement a predesignated image acquisition sequence that spans a predetermined region of interest in response to the trigger.
  • system 100 can also include one or more light sources (not shown) to illuminate surfaces of an object, and operation of such light sources can also be coordinated by a central device (e.g., image processing device 132), and/or control can be decentralized (e.g., an imaging device can control operation of one or more light sources, a processor associated with one or more light sources can control operation of the light sources, etc.).
  • system 100 can be configured to concurrently (e.g., at the same time or over a common time interval) acquire images of multiple sides of an object, including as part of a single trigger event.
  • each imaging device 112 can be configured to acquire a respective set of one or more images over a common time interval.
  • imaging devices 112 can be configured to acquire the images based on a single trigger event. For example, based on a sensor (e.g., a contact sensor, a presence sensor, an imaging device, etc.) determining that object 118 has passed into the FOV of the imaging devices 112, imaging devices 112 can concurrently acquire images of the respective sides of object 118.
  • each imaging device 112 can generate an image set depicting a FOV or various FOVs of a particular side or sides of an object supported by conveyor 116 (e.g., object 118).
  • image processing device 132 can map 3D locations of one or more corners of object 118 to a 2D location within each image in a set of images output by each imaging device.
  • image processing device can generate a mask that identifies which portion of an image is associated with each side (e.g., a bit mask with a 1 indicating the presence of a particular side, and a 0 indicating an absence of a particular side) based on the 2D location of each corner.
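
The corner-projection and masking idea could be sketched as follows; this is an assumption-laden illustration rather than the publication's method, using a generic pinhole camera model with assumed intrinsics (K), extrinsics (R, t), and a coarse bounding-box mask.

```python
import numpy as np

def project_corners(corners_3d: np.ndarray, K: np.ndarray,
                    R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Project Nx3 world-space corner coordinates into pixel coordinates
    using an assumed pinhole model: x = K [R | t] X."""
    cam = (R @ corners_3d.T + t.reshape(3, 1))  # world -> camera frame
    uv = K @ cam                                # camera frame -> image plane
    return (uv[:2] / uv[2]).T                   # Nx2 pixel coordinates

def side_mask(corners_2d: np.ndarray, image_shape: tuple) -> np.ndarray:
    """Build a coarse bit mask (1 inside the side's bounding box, 0 elsewhere)
    from the projected corners of one object side."""
    mask = np.zeros(image_shape, dtype=np.uint8)
    x0, y0 = np.floor(corners_2d.min(axis=0)).astype(int)
    x1, y1 = np.ceil(corners_2d.max(axis=0)).astype(int)
    h, w = image_shape
    mask[max(y0, 0):min(y1, h), max(x0, 0):min(x1, w)] = 1
    return mask
```
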
  • image processing device can stitch images associated with a same side of an object into one image that shows a more complete view of the side of the object (e.g., as described in U.S. Application No. 17/019,742, filed on September 14, 2020, which is hereby incorporated by reference herein in its entirety; and in U.S. Application No. 17/837,998, filed on June 10, 2022, which is hereby incorporated by reference herein in its entirety).
  • in addition to the 3D locations of one or more corners of a target object (e.g., object 118a), the 3D locations of one or more corners of an object 118c (a leading object) ahead of the target object 118a on the conveyor 116 and/or the 3D locations of one or more corners of an object 118b (a trailing object) behind the target object 118a on the conveyor 116 may be mapped to a 2D location within each image in the set of images output by each imaging device. Accordingly, if an image captures more than one object (118a, 118b, 118c), one or more corners of each object in the image may be mapped to the 2D image.
  • FIG. 2 shows another example of a system for capturing multiple images of each side of an object in accordance with an embodiment of the technology.
  • System 200 includes multiple banks of imaging devices 212, 214, 216, 218, 220, 222 and multiple mirrors 224, 226, 228, 230 in a tunnel arrangement 202.
  • each bank 212, 214, 216, 218, 220, 222 includes four imaging devices that are configured to capture images of one or more sides of an object (e.g., object 208a) and various FOVs of the one or more sides of the object.
  • top trail bank 216 and mirror 228 may be configured to capture images of the top and back surfaces of an object using imaging devices 234, 236, 238, and 240.
  • the banks of imaging devices 212, 214, 216, 218, 220, 222 and mirrors 224, 226, 228, 230 can be mechanically coupled to a support structure 242 above a conveyor 204.
  • imaging devices for imaging different sides of an object can be reoriented relative to the illustrated positions in FIG. 2 (e.g., imaging devices can be offset, imaging devices can be placed at the corners, rather than the sides, etc.).
  • an imaging device can be dedicated to acquiring images of multiple sides of an object including with overlapping acquisition areas relative to other imaging devices included in the same system.
  • a bank of imaging devices (e.g., bank 212, 214, 216, 218, 220, or 222) may be referred to as an imaging device, and the imaging devices in the bank (e.g., imaging devices 234, 236, 238, and 240) may be referred to as imagers.
  • system 200 also includes a dimensioner 206 and an image processing device 232.
  • multiple objects 208a, 208b and 208c may be supported on the conveyor 204 and travel through the tunnel 202 along a direction indicated by arrow 210.
  • each bank of imaging devices 212, 214, 216, 218, 220, 222 (and each imaging device in a bank) can generate a set of images depicting a FOV or various FOVs of a particular side or sides of an object supported by conveyor 204 (e.g., object 208a).
  • while FIGs. 1A and 2 depict a dynamic support structure (e.g., conveyor 116, conveyor 204) that is moveable, in some embodiments, a stationary support structure may be used to support objects to be imaged by one or more imaging devices.
  • FIG. 3 shows another example system for capturing multiple images of each side of an object in accordance with an embodiment of the technology.
  • system 300 can include multiple imaging devices 302, 304, 306, 308, 310, and 312, which can each include one or more image sensors, at least one lens arrangement, and at least one control device (e.g., a processor device) configured to execute computational operations relative to the image sensor.
  • imaging devices 302, 304, 306, 308, 310, and/or 312 can include and/or be associated with a steerable mirror (e.g., as described in U.S. Application No. 17/071,636, filed on October 13, 2020, which is hereby incorporated by reference herein in its entirety).
  • Each of the imaging devices 302, 304, 306, 308, 310, and/or 312 can selectively acquire image data from different fields of view (FOVs), corresponding to different orientations of the associated steerable mirror(s).
  • system 300 can be utilized to acquire multiple images of each side of an object.
  • system 300 can be used to acquire images of multiple objects presented for image acquisition.
  • system 300 can include a support structure that supports each of the imaging devices 302, 304, 306, 308, 310, 312 and a platform 316 configured to support one or more objects 318, 334, 336 to be imaged (note that each object 318, 334, 336 may be associated with one or more symbols, such as a barcode, a QR code, etc.).
  • a transport system (not shown), including one or more robot arms (e.g., a robot bin picker), may be used to position multiple objects (e.g., in a bin or other container) on platform 316.
  • the support structure can be configured as a caged support structure. However, this is merely an example, and support structure can be implemented in various configurations.
  • support platform 316 can be configured to facilitate imaging of the bottom side of one or more objects supported by the support platform 316 (e.g., the side of an object (e.g., object 318, 334, or 336) resting on platform 316).
  • support structure 316 can be implemented using a transparent platform, a mesh or grid platform, an open center platform, or any other suitable configuration.
  • acquisition of images of the bottom side can be substantially similar to acquisition of other sides of the object.
  • imaging devices 302, 304, 306, 308, 310, and/or 312 can be oriented such that a FOV of the imaging device can be used to acquire images of a particular side of an object resting on support platform 316, such that each side of an object (e.g., object 318) placed on and supported by support platform 316 can be imaged by imaging devices 302, 304, 306, 308, 310, and/or 312.
  • imaging device 302 can be mechanically coupled to the support structure above support platform 316, and can be oriented toward an upper surface of support platform 316
  • imaging device 304 can be mechanically coupled to the support structure below support platform 316
  • imaging devices 306, 308, 310, and/or 312 can each be mechanically coupled to a side of the support structure, such that a FOV of each of imaging devices 306, 308, 310, and/or 312 faces a lateral side of support platform 316.
  • each imaging device can be configured with an optical axis that is generally parallel with another imaging device, and perpendicular to other imaging devices (e.g., when the steerable mirror is in a neutral position).
  • imaging devices 302 and 304 can be configured to face each other (e.g., such that the imaging devices have substantially parallel optical axes), and the other imaging devices can be configured to have optical axes that are orthogonal to the optical axes of imaging devices 302 and 304.
  • while the illustrated arrangement of imaging devices 302, 304, 306, 308, 310, and 312 can be advantageous, in some embodiments, imaging devices for imaging different sides of an object can be reoriented relative to the illustrated positions of FIG. 3 (e.g., imaging devices can be offset, imaging devices can be placed at the corners, rather than the sides, etc.).
  • a different number or arrangement of imaging devices and/or a different arrangement of mirrors can be used to configure a particular imaging device to acquire images of multiple sides of an object.
  • fixed mirrors, disposed such that imaging devices 306 and 310 can capture images of a far side of object 318, can be used in lieu of imaging devices 308 and 312.
  • system 300 can be configured to image each of the multiple objects 318, 334, 336 on the platform 316.
  • system 300 can include a dimensioner 330.
  • a dimensioner can be configured to determine dimensions and/or a location of an object supported by support structure 316 (e.g., object 318, 334, or 336).
  • dimensioner 330 can determine 3D coordinates of each corner of the object in a coordinate space defined with reference to one or more portions of system 300.
  • dimensioner 330 can determine 3D coordinates of each of eight corners of an object that is at least roughly cuboid in shape within a Cartesian coordinate space defined with an origin at dimensioner 330.
  • dimensioner 330 can determine 3D coordinates of each of eight corners of an object that is at least roughly cuboid in shape within a Cartesian coordinate space defined with respect to support platform 316 (e.g., with an origin that originates at a center of support platform 316).
  • an image processing device 332 can coordinate operations of imaging devices 302, 304, 306, 308, 310, and/or 312 and/or may be configured similar to image processing device described herein (e.g., image processing device 132 of FIG. 1A, image processing device 232 of FIG. 2, and image processing device 408 of FIG. 4).
  • FIG. 4 illustrates a high performance machine vision system 400, which may be configured to provide sufficient visibility into the operation of the machine vision system 400 so as to enable its performance to be maintained, for example, in real time.
  • the machine vision system 400 may include one or more imaging devices 402 (only one of which is illustrated).
  • Each imaging device 402 may include a primary imager 412 and one or more secondary imagers 414.
  • the one or more secondary imagers 414 may be configured to provide information to the primary imager 412 over, for example, a wired connection, which may deliver the information faster than a wireless connection.
  • the primary imager 412 may be configured to generate metadata based on the information provided by the one or more secondary imagers.
  • the metadata generated by the primary imager 412 may indicate a relationship between image data and respective imagers.
  • the machine vision system 400 may include a dimensioner 404 and a motion measurement device 406.
  • the dimensioner 404 may be configured to measure dimensions (e.g., height, length, and width) of the objects captured by the imaging device 402, based on which performance metrics of a tunnel such as distances between objects in the tunnel may be derived.
  • the dimensioner 404 may be configured similar to dimensioners described herein (e.g., dimensioner 150 of FIG. 1B, dimensioner 206 of FIG. 2, and dimensioner 330 of FIG. 3).
  • the motion measurement device 406 may be configured to track the physical movements of the objects captured by the imaging device 402, based on which performance metrics of a tunnel such as rates of capture by respective imagers 412 and 414 of the imaging device 402 may be derived.
  • the motion measurement device 406 may be configured similar to motion measurement devices described herein (e.g., motion measurement device 152 of FIG. 1B).
  • An image processing device 408 may be configured to control when the imaging device 402, dimensioner 404, and motion measurement device 406 can acquire respective image data and measurement data through, for example, providing trigger signals configured to cause the imaging device 402 to acquire image data and the dimensioner 404 and motion measurement device 406 to acquire measurement data.
  • the primary imager 412 can receive a trigger.
  • the primary imager 412 can initiate the trigger to the one or more secondary imagers 414.
  • the triggered imagers can capture images.
  • the primary imager 412 can generate metadata about the trigger such as a time of trigger, a count of decode results of images captured according to the trigger, etc.
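
A hedged sketch of the kind of per-trigger metadata the primary imager might assemble from secondary-imager reports is shown below; the field names and report structure are assumptions, not taken from the publication.

```python
import time
from typing import Dict, List

def build_trigger_metadata(trigger_id: int,
                           secondary_reports: List[Dict]) -> Dict:
    """Assemble per-trigger metadata of the kind described above from
    reports that the secondary imagers send to the primary imager."""
    return {
        "trigger_id": trigger_id,
        "trigger_time": time.time(),
        # count how many secondary images produced a decode result
        "decode_count": sum(1 for r in secondary_reports if r.get("decode_result")),
        "reporting_imagers": [r.get("imager_id") for r in secondary_reports],
    }
```
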
  • the image processing device 408 may be configured to receive the image data from the imaging device 402, the metadata from the primary imager 412 of the imaging device 402, and the measurement data acquired by the dimensioner 404 and motion measurement device 406 over, for example, one or more wired connections, which may deliver the information faster than wireless connections.
  • the image processing device 408 may be configured to receive the image data and metadata from the primary imager 412 of the imaging device 402 over a first wired connection, and the image data from the secondary imagers 414 of the imaging device 402 over one or more second wired connections. Such a configuration may enable the image processing device 408 to receive and interpret respective data simultaneously and in real time.
  • the image processing device 408 may be configured to interpret the received data.
  • the image processing device 408 may include a symbol decoder 416, which may be configured to decode symbols from the received image data from the one or more imaging devices 402.
  • the primary imager 412 and/or the one or more secondary imagers 414 may be configured to decode symbols in images captured respectively.
  • the one or more secondary imagers 414 can send decode results to the primary imager 412, and the primary imager 412 can determine the quality of the decode results and provide feedback to the one or more secondary imagers 414.
  • the image processing device 408 may include an image stitcher 418, which may be configured to generate composite image data by, for example, stitching image data acquired by one or more imagers 412 and 414 over one or more triggers.
  • the image processing device 408 may include a correlator 426, which may be configured to generate correlated system information.
  • the correlated system information may be generated based at least in part on the metadata generated by the primary imagers 412 of the one or more imaging devices 402.
  • the correlator 426 may be configured to correlate the received image data to respective imagers 412 and 414 of respective imaging devices 402, and/or to the symbols decoded by the symbol decoder 416.
  • the correlator 426 may be configured to correlate the composite image data generated by the image stitcher 418 to sides of the objects captured by the one or more imaging devices 402.
  • the correlator 426 may be configured to correlate the received measurement data to the received image data from respective imagers 412 and 414 of respective imaging devices 402, and/or to the composite image data generated by the image stitcher 418.
  • the image processing device 408 may include an aggregator 428, which may be configured to aggregate the correlated system information into one or more performance metrics.
  • the one or more performance metrics may provide a holistic view of the performance of the machine vision system 400, which may include one or more tunnels described herein (e.g., tunnel 102 of FIG. 1A, and tunnel 202 of FIG. 2).
  • the one or more performance metrics may include one or more metrics for individual devices (e.g., imagers 412, 414, dimensioner 404, and motion measurement device 406) in the machine vision system 400, and one or more metrics for individual triggers.
  • tunnel analytics 410 may include a live dashboard 420, a performance dashboard 422, and a result browser 424.
  • the live dashboard 420 may be configured to generate presentations for display based at least in part on information received from the image processing device 408 and enable the presentations to be searchable by the performance metrics.
  • the live dashboard 420 may generate images for display within a threshold time period from when the image data is acquired by the imagers 412 and 414 of the imaging devices 402, respectively.
  • the threshold time period may be, for example, five to twenty seconds when the images are generated based on composite image data from a plurality of imaging devices, and two to ten seconds (e.g., five seconds) when the images are generated based on received image data from individual imaging devices.
  • Such live result streams may enable a reviewer of the live result streams to pause the live result streams so as to investigate areas of interest such as data associated with a particular trigger.
  • the performance dashboard 422 may be configured to generate graphical analyses of the one or more metrics for individual devices in the machine vision system.
  • the one or more metrics may include a rate of capture by a respective device, dimensions of objects captured by a respective device, scales of objects captured by a respective device, decode time by a respective device, etc.
  • the graphical analyses may include one or more charts. In some embodiments, the charts may show one or more of the metrics of respective devices over time. For example, a step chart may show rates of capture or good read count by respective imagers in an imaging device. As another example, a histogram chart may show dimensions of objects captured by a respective imager.
  • the one or more metrics for individual devices in the machine vision system may be customized according to requests. For example, minimum values, maximum values, and/or mean values of the one or more metrics for individual devices in the machine vision system may be provided.
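
As an illustrative sketch (not from the publication), the following helper reduces per-device metric samples to the minimum, maximum, and mean values mentioned above; the data layout is assumed.

```python
from statistics import mean
from typing import Dict, Iterable

def summarize_metric(samples_by_device: Dict[str, Iterable[float]]) -> Dict[str, dict]:
    """Reduce a per-device series of metric samples (e.g., decode times in ms)
    to the minimum, maximum, and mean values requested for display."""
    summary = {}
    for device, samples in samples_by_device.items():
        values = list(samples)
        if not values:
            continue
        summary[device] = {
            "min": min(values),
            "max": max(values),
            "mean": mean(values),
        }
    return summary

# Example: summarize_metric({"imager_1": [12.0, 9.5, 15.2], "imager_2": [22.1, 18.4]})
```
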
  • the result browser 424 may be configured to generate presentations of the data by the one or more metrics for individual triggers.
  • the one or more metrics for individual triggers may include trigger information, decode results, dimensioner results, scale results, and sorter information.
  • the data may be searchable and filterable by the one or more metrics.
  • the result browser 424 may be configured to generate representations of the filtered data, which may facilitate further analyses of the performances of respective tunnels.
  • the image processing device 408 and tunnel analytics 410 may communicate over one or more communication links 430.
  • communication link 430 can be any suitable communication network or combination of communication networks.
  • communication link 430 can include a Wi-Fi network (which can include one or more wireless routers, one or more switches, etc.), a peer-to-peer network (e.g., a Bluetooth network), a cellular network (e.g., a 3G network, a 4G network, a 5G network, etc., complying with any suitable standard, such as CDMA, GSM, LTE, LTE Advanced, NR, etc.), a wired network, etc.
  • communication link 430 can be a local area network (LAN), a wide area network (WAN), a public network (e.g., the Internet), a private or semi-private network (e.g., a corporate or university intranet), any other suitable type of network, or any suitable combination of networks.
  • Communications links shown in FIG. 4 can each be any suitable communications link or combination of communications links, such as wired links, fiber optic links, Wi-Fi links, Bluetooth links, cellular links, etc.
  • components of system 400 may communicate directly rather than through a communication network.
  • the components of system 400 may communicate through one or more intermediary devices not illustrated in FIG. 4.
  • FIG. 5A is a flow chart illustrating a method 500 for analyzing image data captured by a machine vision system, according to some embodiments.
  • an image processing device (e.g., image processing device 408) may receive, from a primary imager (e.g., primary imager 412 of imaging device 402), first image data and metadata generated by the primary imager.
  • the first image data can include images captured by the primary imager.
  • the metadata can indicate trigger information such as time of trigger, decode count for each trigger, etc.
  • the metadata can be generated by the primary imager based on information provided by one or more secondary imagers (e.g., secondary imagers 414 of imaging device 402) to the primary imager.
  • the information may be provided by the secondary imagers to the primary imager through wired connections between the primary imager and secondary imagers (e.g., which may deliver the information faster than a wireless connection).
  • the one or more secondary imagers can decode symbols in the images captured by the one or more secondary imagers, and the information provided by the one or more secondary imagers can include decode results of the symbols in the images captured by the one or more secondary imagers and metadata.
  • the image processing device may receive, from the one or more secondary imagers, second image data generated by the one or more secondary imagers.
  • the second image data can include images captured by the one or more secondary imagers and metadata.
  • the second image data can include quality indications for the images captured by the one or more secondary imagers.
  • the quality indications can be determined, by the primary imager, based on the decode results, and sent to the one or more secondary imagers from the primary imager. For example, the primary imager can determine whether a decode result of a symbol in an image is a “good” read or a “bad” read of the symbol such that the image can have a corresponding label associated therewith.
  • the second image data may also be sent to the primary imager by the secondary imagers and transmitted to the image processing device by the primary imager.
  • the first image data and second image data and metadata may be provided to the image processing device through wired connections.
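A minimal sketch of how the correlation of the first and second image data based on the metadata might be carried out, assuming the image records and metadata share a trigger index (the function name correlate_system_information and the dictionary field names are illustrative assumptions):

```python
from collections import defaultdict

def correlate_system_information(first_image_data, second_image_data, metadata):
    """Group image records from the primary and secondary imagers by trigger index.

    first_image_data / second_image_data: iterables of dicts such as
        {"imager_id": ..., "trigger_index": ..., "image": ..., "decode_result": ...}
    metadata: per-trigger records generated by the primary imager, e.g.
        {"trigger_index": ..., "trigger_time": ..., "decode_count": ...}
    """
    by_trigger = defaultdict(lambda: {"trigger": None, "images": []})
    for entry in metadata:
        by_trigger[entry["trigger_index"]]["trigger"] = entry
    for record in list(first_image_data) + list(second_image_data):
        by_trigger[record["trigger_index"]]["images"].append(record)
    return dict(by_trigger)
```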
  • FIG. 5B is a flow chart illustrating a method 501 for providing a live dashboard (e.g., live dashboard 420), according to some embodiments.
  • tunnel analytics (e.g., tunnel analytics 410) may generate images based on the correlated system information for the primary imager and the secondary imagers, respectively, for display via a graphical user interface (GUI).
  • a live dashboard can provide, for viewing, a live stream of images and associated machine vision data (e.g., triggers, results, etc.).
  • the live dashboard can enable real-time analysis of the machine vision system, such as to determine whether the tunnels are being triggered properly, whether packages are moving through the tunnels correctly, etc.
  • FIG. 6A is an exemplary user interface 600 of a live dashboard that shows live captured images and associated data, according to some embodiments.
  • the user interface 600 may include a source tunnel selection drop-down menu 604 that allows a user to select a tunnel (e.g., tunnel 102 of FIG. 1A or tunnel 202 of FIG. 2) to view in the user interface 600.
  • the user interface 600 may include an imaging device selection drop-down menu 606 configured for the selection of one or more image source devices (e.g., imagers 412, 414) in the user interface 600.
  • multiple image source devices are selected and shown in an array 608, each row of which may show images captured by a respective image source device.
  • the images may be normalized by selection of box 620.
  • the images may be selected to be further examined in an image viewer 612.
  • image 610 is selected.
  • the image viewer 612 may include image control selectable list 614, which may be configured to manipulate the selected image in various ways including, for example, Rotate Left 90, Rotate Right 90, Reset Rotation, Zoom In, Zoom Out, Zoom to Original Size, Reset Zoom, Move Center, Reset All Settings, etc.
  • the user interface 600 may include a result table 616, which may be configured to provide information for individual triggers including, for example, Timestamp, Trigger Index, Read String, Length, Width, Height, Object Gap, etc.
  • Each entry in the results table 616 can be stored, for example, as an object with the associated data.
  • the techniques can provide for processing large streams of data from primary and/or secondary imaging devices, which may be received separately from the devices, and organizing the data into objects.
  • the primary imaging device may provide its own associated imaging data and metadata that is generated based on data for the primary imaging device as well as data from the secondary imaging devices.
  • the secondary imaging devices may also provide their own associated imaging data.
  • the techniques as described herein, can include combining the image data that is (separately) received from the various imaging devices based on the metadata in order to generate the objects that are used to populate results table 616.
  • the techniques can provide for downloading data associated with individual entries of the results table 616.
  • the techniques can provide for downloading data associated with individual triggers.
  • the user interface 600 may include a status tile 618, which may be configured to provide information such as Triggers 618A, Multi Reads 618B, Packages Dimensioned 618C, etc.
  • Triggers 618A may provide information such as overall throughput (e.g., 75,100 in FIG. 6), good read count (e.g., 74,552 in FIG. 6), no read count (e.g., 548 in FIG. 6), and read rate (e.g., 99.27% in FIG. 6).
  • Multi Reads 618B may provide information such as multi read count when multiple symbols (e.g., barcodes) are decoded from a single imaged object (e.g., 32,939 in FIG. 6).
  • Packages Dimensioned 618C may provide information such as the number of packages dimensioned, meaning the number of packages having dimensions greater than zero (e.g., 75,100 in FIG. 6), a count of packages legal for trade, and a count of packages not legal for trade.
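As a worked illustration of the figures shown in Triggers 618A, 74,552 good reads out of 75,100 triggers (74,552 good reads plus 548 no reads) corresponds to a read rate of approximately 99.27%. A minimal sketch of this computation (the function name read_rate is an illustrative assumption) is:

```python
def read_rate(good_reads: int, no_reads: int) -> float:
    """Read rate as the fraction of triggers that produced a good read."""
    total = good_reads + no_reads
    return good_reads / total if total else 0.0

# Using the example counts shown in FIG. 6:
print(f"{read_rate(good_reads=74_552, no_reads=548):.2%}")  # -> 99.27%
```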
  • the user interface 600 may include a Reset button 622, which may be configured to provide options to reset the live dashboard on a regular basis (for example, each shift, daily, or weekly), which may make the statistics more meaningful.
  • FIG. 6B is another exemplary user interface 601 of a live dashboard that shows live captured images and associated data, according to some embodiments.
  • the user interface 601 may be configured similar to the user interface 600, but the status tile 619 of the user interface 601 includes an additional interface component Tunnel Connection Status 619A.
  • Tunnel Connection Status 619A may provide status information such as whether the image processing device 408 and/or the devices coupled to the image processing device 408 (e.g., imagers 412, 414, dimensioner 404, motion measurement device 406) are connected to the tunnel analytics 410.
  • FIG. 6C shows the exemplary user interface 601 when Tunnel Connection Status 619A is selected, according to some embodiments.
  • FIG. 7 is an exemplary user interface 700 showing live stitched images 702, according to some embodiments.
  • the user interface 700 may be configured similar to the user interface 600 discussed above. The difference is that the images 702 are stitched images, which may be generated based on one or more live images 602.
  • the stitched images 702 provide a more complete view of respective sides of an object.
  • stitched images 704, 706, and 708 show the back, bottom, and front of an object, respectively. All six sides of the object may be provided.
  • FIG. 8 is a flow chart illustrating a method 800 for providing a performance dashboard (e.g., performance dashboard 422), according to some embodiments.
  • an image processing device (e.g., image processing device 408) may receive, from a primary imager (e.g., primary imager 412 of imaging device 402), first image data captured by the primary imager and metadata.
  • the metadata can be generated by the primary imager based on information provided by secondary imagers (e.g., secondary imagers 414 of imaging device 402) to the primary imager.
  • the information may be provided by the secondary imagers to the primary imager through wired connections between the primary imager and secondary imagers (e.g., which may deliver the information faster than a wireless connection).
  • the image processing device may receive, from the secondary imagers, second image data captured by the one or more secondary imagers.
  • the second image data may also be sent to the primary imager by the secondary imagers and transmitted to the image processing device by the primary imager.
  • the first image data and second image data and metadata may be provided to the image processing device through wired connections. Alternatively or additionally, wireless connections may be used to transmit the first image data and second image data and metadata.
  • the image processing device may correlate the first image data and the second image data based on the metadata generated by the primary imager to generate correlated system information.
  • tunnel analytics (e.g., tunnel analytics 410) may generate one or more graphical analyses of one or more metrics based on the correlated system information for the primary imager and the one or more secondary imagers, respectively.
  • a performance dashboard can provide, for viewing, graphical analyses of various data associated with the machine vision system.
  • the performance dashboard can enable real-time analysis of the machine vision system, such as to determine whether the tunnels are functioning properly, etc.
  • FIG. 9 is a schematic diagram illustrating a user interface 900 of a performance dashboard and the types of charts available through the performance dashboard, according to some embodiments.
  • the user interface 900 may provide a chart type selection drop-down menu 906.
  • the types of charts may include timeseries charts and numerical distributions. When the timeseries chart option is selected, the performance dashboard may provide the further options listed in table 902; when the numerical distribution option is selected, the performance dashboard may provide the further options listed in table 904.
  • the underlying data can be received from, and/or pulled from, the machine vision system devices periodically (e.g., every ten seconds, thirty seconds, minute, etc.).
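A minimal sketch of how per-trigger results might be binned into fixed intervals to produce the points of such a timeseries chart (the function name bin_read_rates and the tuple-based input format are illustrative assumptions; the charting itself is not shown):

```python
from collections import defaultdict
from datetime import datetime, timedelta

def bin_read_rates(results, interval: timedelta):
    """Aggregate (timestamp, good_read) results for one device into (bin_start, read_rate) points."""
    bins = defaultdict(lambda: [0, 0])  # bin_start -> [good reads, total triggers]
    step = interval.total_seconds()
    for timestamp, good_read in results:
        bin_start = datetime.fromtimestamp((timestamp.timestamp() // step) * step)
        bins[bin_start][1] += 1
        if good_read:
            bins[bin_start][0] += 1
    return sorted((start, good / total) for start, (good, total) in bins.items())

# One-hour bins give a coarse view (as in FIG. 12A); fifteen-minute bins drill down (as in FIG. 12B).
hourly = bin_read_rates(results=[], interval=timedelta(hours=1))
quarter_hourly = bin_read_rates(results=[], interval=timedelta(minutes=15))
```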
  • FIG. 10 is an exemplary timeseries chart 1000 showing read rate of a device (e.g., primary imager 412) over time, according to some embodiments.
  • the user interface may provide a button 1002 configured to enable the selection of additional charts.
  • the selection of the button 1002 may lead to another user interface such as the user interface 900 discussed above.
  • FIG. 11 is an exemplary numerical distribution 1100 showing dimensions distribution captured by the device of FIG. 10, according to some embodiments. In some embodiments, these charts may be displayed together. These charts may be used together to determine whether the device is performing properly. For example, a sudden drop of read rate may indicate abnormalities in the tunnel.
  • FIG. 12A is an exemplary timeseries chart 1200A showing read rate of another device (e.g., a secondary imager) over time with one (1) hour intervals, according to some embodiments.
  • FIG. 12B is an exemplary timeseries chart 1200B showing the read rate of FIG. 12A over time with fifteen (15) minute intervals, according to some embodiments. This configuration enables potential issues to be identified quickly with, for example, larger intervals, and then examined in more detail with, for example, smaller intervals.
  • a chart may provide one or more metrics of multiple devices.
  • FIG. 13 is an exemplary timeseries chart 1300 showing read rates of multiple devices (e.g., the primary imager 412 and the secondary imagers 414 of the imaging device 402) over time, according to some embodiments.
  • Such a configuration may enable the identification of one or more devices that are under-performing, which may allow users to improve their performance.
  • the techniques can be used to determine when a device is dirty (e.g., a dirty lens), when a device has moved (e.g., bumped by an object moving through the tunnel), when a device is offline, etc.
  • a performance dashboard may provide options to create standard and/or custom events.
  • the creation of these events may enable notification and analyses of the occurrence of the events.
  • An event may be defined as a condition around one or more metrics.
  • Such a configuration may enable the creation of various levels of notifications including, for example, a critical level, a warning level, and an info-level.
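A minimal sketch of how an event defined as a condition around a metric might be represented and evaluated (the EventRule structure, its field names, and the evaluate function are illustrative assumptions; the disclosure does not prescribe a particular data model):

```python
from dataclasses import dataclass
from typing import Callable, Optional

OPERATORS: dict[str, Callable[[float, float], bool]] = {
    ">": lambda value, threshold: value > threshold,
    "<": lambda value, threshold: value < threshold,
    ">=": lambda value, threshold: value >= threshold,
    "<=": lambda value, threshold: value <= threshold,
}

@dataclass
class EventRule:
    device_id: str
    metric: str          # e.g., "trigger_overrun_count" or "read_rate"
    operator: str        # key into OPERATORS
    threshold: float
    window_minutes: int  # time window over which the metric is evaluated
    level: str           # e.g., "critical", "warning", or "info"
    name: str = ""

def evaluate(rule: EventRule, windowed_value: float) -> Optional[dict]:
    """Return a notification payload if the rule's condition is met over its window, else None."""
    if OPERATORS[rule.operator](windowed_value, rule.threshold):
        return {"name": rule.name, "level": rule.level, "device": rule.device_id,
                "metric": rule.metric, "value": windowed_value}
    return None
```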
  • FIG. 14A is an exemplary user interface 1400A for creating new events, according to some embodiments. As illustrated, the user interface 1400A may provide multiple drop-down menus 1402 such that aspects of an event may be selected including, for example, a statistic (e.g., Trigger Overrun Count in FIG. 14A).
  • FIG. 14B is an exemplary user interface 1400B showing created events, according to some embodiments.
  • the user interface 1400B may include buttons 1404 configured for actions such as editing an event, removing an event, and/or changing notification type.
  • FIG. 14C is an exemplary user interface 1400C for creating new events, according to some embodiments.
  • the user interface 1400C can be a subpage (instead of a dialog as shown with the user interface 1400A).
  • the user interface 1400C can allow a user to provide inputs to create the conditions of a new event. As illustrated, the user interface 1400C can allow for the designation or selection of devices such that events can be created for individual devices. Although one device (here, INBD_Top_3-Right_P) is illustrated with an associated selection button 1410, it should be appreciated that multiple device options can be available for selection, and the designation of devices can be achieved by any suitable manner, such as by additionally or alternatively providing text boxes for a user to input the name of a device, etc.
  • the user interface 1400C can also allow for specifying metric conditions for individual events as illustrated via the dropdowns for metric, operator, and time window, and the text field for threshold in the section 1412 for creating a performance condition.
  • alert details can be configured including an alert name, an alert level, and any desired alert notes.
  • FIG. 14D is another exemplary user interface 1400D showing created events, according to some embodiments.
  • the user interface 1400D may include buttons 1408 configured for actions such as editing an event, removing an event, and/or changing notification type.
  • FIG. 15 is an exemplary user interface 1500 for viewing, filtering, and sorting notifications, according to some embodiments.
  • the user interface 1500 may provide multiple drop-down menus 1502 configured for various sorting options (for example, timeframe) and various filtering options (for example, timeframe and notification level).
  • the user interface 1500 may also provide input boxes 1504 configured for additional filtering options including, for example, Service Name, Title, Details.
  • FIG. 16 is a flow chart illustrating a method 1600 for providing a result browser (e.g., result browser 424), according to some embodiments.
  • an image processing device (e.g., image processing device 408) may receive, from a primary imager (e.g., primary imager 412 of imaging device 402), first image data captured by the primary imager and metadata.
  • the metadata may be generated by the primary imager based on information provided by secondary imagers (e.g., secondary imagers 414 of imaging device 402) to the primary imager.
  • the information may be provided by the secondary imagers to the primary imager through wired connections between the primary imager and secondary imagers (e.g., which may deliver the information faster than a wireless connection).
  • the image processing device may receive, from the secondary imagers, second image data captured by the one or more secondary imagers.
  • the second image data may also be sent to the primary imager by the secondary imagers and transmitted to the image processing device by the primary imager.
  • the first image data and second image data and metadata may be provided to the image processing device through wired connections. Alternatively or additionally, wireless connections may be used to transmit the first image data and second image data and metadata.
  • the image processing device may correlate the first image data and the second image data based on the metadata generated by the primary imager to generate correlated system information.
  • tunnel analytics (e.g., tunnel analytics 410) may filter the correlated system information according to one or more metrics.
  • tunnel analytics may generate a representation of the filtered correlated system information.
  • a result browser may provide presentations by various metrics such as individual triggers. The result browser can enable customized analyses of the machine vision system by system data such as triggers, which may expose hidden problems in the tunnels.
  • FIG. 17 is an exemplary user interface 1700 showing trigger data 1702 in a table 1704, according to some embodiments. As illustrated, each row of the table 1704 may include various aspects of individual triggers such as Date & Time, Trigger Index, Good Read, Read String, Angle, Length, Width, Height, Gaps, and LFT.
  • the user interface 1700 may include drop-down menu 1706 configured to provide searching and filtering options for the table 1704.
  • various aspects of the tunnels may be searched based on user-defined multi-conditional search queries including, for example, No Reads that have package heights greater than a user-defined threshold, Non-legal for trade packages, packages with all dimensions assigned to -1, all packages with all dimensions assigned 0, triggers that are flagged as off-box (which may require 3D calibration), and triggers that have codes on the bottom of the package (which may require 3D calibration).
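A minimal sketch of how such user-defined multi-conditional searches might be expressed over trigger records (the function search_triggers, the field names, and the example threshold value are illustrative assumptions):

```python
def search_triggers(triggers, *conditions):
    """Return the triggers matching every supplied condition (logical AND).

    triggers: iterable of dicts with keys such as "good_read", "height",
              "legal_for_trade", "length", "width", etc.
    conditions: callables that take a trigger dict and return a bool.
    """
    return [t for t in triggers if all(condition(t) for condition in conditions)]

# Example: "No Reads that have package heights greater than a user-defined threshold".
height_threshold = 150.0  # hypothetical value, in the dimensioner's units
no_read_tall = search_triggers(
    [],  # trigger records would come from the correlated system information
    lambda t: not t["good_read"],
    lambda t: t["height"] > height_threshold,
)
```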
  • FIG. 18A is an exemplary user interface 1800A for searching trigger data, according to some embodiments.
  • the user interface 1800A may provide multiple selection lists 1802 configured for trigger data to be searched by, for example, Decode Results, Dimensioner, Scale Result, and Sorter Information, respectively.
  • selection of the selection lists 1802 may lead to user interfaces 1800B, 1800C, 1800E, and 1800F shown in FIGs. 18B, 18C, 18E, and 18F, respectively.
  • the user interface 1800B may be configured for searching the trigger data by decode results.
  • the user interface 1800B may include input boxes 1804 configured for providing search options such as Symbology, Module Size, etc.
  • the user interface 1800B may include boxes 1806 configured for providing search options such as Assignment Results, Assigned Surface, etc.
  • the user interface 1800C may be configured for searching the trigger data by dimensioner.
  • the user interface 1800C may include radio buttons 1808 configured for providing search options such as Legal for Trade, Is Side By Side, etc.
  • the user interface 1800C may include input boxes 1810 configured for providing search options such as Angle, Length, Width, Height, Object Gap, etc.
  • FIG. 18D is an exemplary user interface 1800D showing the result of the search of FIG. 18C, according to some embodiments.
  • the user interface 1800E may be configured for searching the trigger data by scale result.
  • the user interface 1800E may include radio buttons 1812 configured for providing search options such as Legal For Trade, etc.
  • the user interface 1800E may include input boxes 1814 configured for providing search options such as Weight.
  • the user interface 1800F may be configured for searching the trigger data by sorter information.
  • the user interface 1800F may include input boxes 1816 configured for providing search options such as Vendor Token.
  • FIG. 19A is another exemplary user interface 1900A for searching trigger data, according to some embodiments.
  • the user interface 1900A may be configured similar to the user interface 1800A discussed above.
  • trigger data may be searched by Trigger Information, Decode Results, Dimensioner, Scale Result, and Sorter Information.
  • the searches by Decode Results, Dimensioner, Scale Result, and Sorter Information may be configured similar to the examples shown in FIGs. 18A-18F.
  • FIG. 19B is an exemplary user interface 1900B for searching the trigger data by trigger information, according to some embodiments.
  • the user interface 1900B may include buttons 1902 configured for providing search options such as Multi Read.
  • the user interface 1900B may include input boxes 1904 configured for providing search options such as Trigger Index.
  • the user interface 1900B may include drop-down menu 1906 configured for providing search options such as Trigger Type.
  • FIG. 19C is an exemplary user interface 1900C for downloading the search results, according to some embodiments.
  • various data may be saved including, for example, Trigger and Image.
  • Individual trigger information may include image data and measurement data.
  • the triggers may be assigned into one or more groups by result tags.
  • FIG. 20 is an exemplary user interface 2000 for editing result tags, according to some embodiments.
  • the illustrated trigger has a result tag “Multi Read” shown in area 2002 of the user interface 2000.
  • the result tag may be edited by, for example, button 2004 of the user interface 2000.
  • A new result tag may be added through input box 2006 of the user interface 2000.
  • Such a configuration enables customized grouping of trigger data.
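A minimal sketch of how triggers might be grouped by result tags and how a new tag might be attached to a trigger (the functions group_by_result_tags and add_tag and the dictionary-based trigger records are illustrative assumptions):

```python
from collections import defaultdict

def group_by_result_tags(triggers):
    """Group trigger records by their result tags; a trigger may carry several tags."""
    groups = defaultdict(list)
    for trigger in triggers:
        for tag in trigger.get("tags", []):
            groups[tag].append(trigger)
    return dict(groups)

def add_tag(trigger, tag: str):
    """Attach a new result tag to a trigger record (e.g., entered through input box 2006)."""
    trigger.setdefault("tags", [])
    if tag not in trigger["tags"]:
        trigger["tags"].append(tag)
```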
  • FIG. 21A is an exemplary user interface 2100A showing trigger data in a table 2102, according to some embodiments.
  • the user interface 2100A may include button 2104, which may activate options for reconfiguring the table 2102.
  • FIG. 21B is an exemplary user interface 2100B for reconfiguring the table showing the trigger data, according to some embodiments.
  • the table 2102 may include radio buttons 2104 such that the table 2102 may be reconfigured based on one or more of Date & Time, Trigger Index, Good Read, Multi Read, Decode Results, Assigned Surface, Dimensioner, Scale, and Sorter Information.
  • Decode Results may include Read String, Assignment Result, and Read String Valid.
  • Assigned Surface may include Left, Right, Top, Front, Back, and Bottom.
  • FIG. 21C is an exemplary user interface 2100C showing the result of a reconfigured table 2104, according to some embodiments.
  • FIG. 22 is an exemplary user interface 2200 for reviewing trigger data, according to some embodiments.
  • the user interface 2200 may include button 2202, which enables the trigger data to be downloaded for further analyses.
  • any suitable computer readable media can be used for storing instructions for performing the functions and/or processes described herein.
  • computer readable media can be transitory or non-transitory.
  • non-transitory computer readable media can include media such as magnetic media (such as hard disks, floppy disks, etc.), optical media (such as compact discs, digital video discs, Blu-ray discs, etc.), semiconductor media (such as RAM, Flash memory, electrically programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), etc.), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media.
  • transitory computer readable media can include signals on networks, in wires, conductors, optical fibers, circuits, or any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media.
  • the term "mechanism," as used herein, can encompass hardware, software, firmware, or any suitable combination thereof.
  • a method for analyzing image data captured by a machine vision system comprising a primary imager and a secondary imager, the method comprising: receiving, from the primary imager, first image data and metadata generated by the primary imager, the metadata based at least in part on information provided by the secondary imager to the primary imager; receiving, from the secondary imager, second image data, wherein the second image data comprises a captured image as captured by the secondary imager; and generating correlated system information by correlating the first image data and the second image data based on the metadata.
  • the second image data further comprises a quality indication corresponding to the captured image, the quality indication determined by the primary imager based on the decode result.
  • receiving the first image data and the metadata comprises receiving the first image data and the metadata over a first wired connection with the primary imager; and receiving the second image data comprises receiving the second image data over a second wired connection with the secondary imager.
  • the correlated system information comprises status information of the machine vision system comprising at least one of: an indication of whether the primary imager is connected, an indication of whether the secondary imager is connected, a count of machine vision system triggers, a count of multi reads, or a count of objects.
  • the status information of the machine vision system comprises status information for individual imagers of the primary imager and the secondary imager.
  • the metric comprises at least one of: dimensioner results, scale results, or sorter information.
  • the metric comprises at least one of: machine vision system trigger information or decode results.
  • a non-transitory computer-readable medium comprising instructions which, when executed, cause at least one processor to carry out the method of any one of the preceding aspects.
  • a machine vision system comprising: a primary imager configured to generate first image data and metadata; a secondary imager in communication with the primary imager and configured to generate second image data, wherein the second image data comprises a captured image as captured by the secondary imager; and at least one processor in communication with the primary imager and the secondary imager and configured to: receive the first image data and the metadata; receive the second image data; and generate correlated system information by correlating the first image data and the second image data based on the metadata, wherein the metadata is generated by the primary imager based on information provided by the secondary imager.

Landscapes

  • Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The techniques described herein provide high performance machine vision systems. A high performance machine vision system includes a live dashboard, a performance dashboard, and a result browser. The live dashboard provides, for viewing, a live stream of images and associated machine vision data. The live dashboard enables real-time analysis of the machine vision system, such as to determine whether the system including, for example, tunnels, is being triggered properly, and whether packages are moving through the system correctly. The performance dashboard provides, for viewing, graphical analyses of various data associated with the machine vision system. The performance dashboard enables real-time analysis of the machine vision system, such as to determine whether the system is functioning properly, etc. The result browser provides presentations by various metrics such as individual triggers. The result browser enables customized analyses of the machine vision system, which may expose hidden problems in the system.

Description

HIGH PERFORMANCE MACHINE VISION SYSTEM
RELATED APPLICATION
[0001] This Application claims priority to and the benefit of U.S. Provisional Application Serial No. 63/441570, titled “HIGH PERFORMANCE MACHINE VISION SYSTEM,” filed on January 27, 2023, which is herein incorporated by reference in its entirety.
FIELD
[0002] The techniques described herein relate generally to imaging systems, including machine vision systems that are configured to acquire and analyze images of objects or symbols (e.g., barcodes).
BACKGROUND
[0003] Machine vision systems are generally configured for use in capturing images of objects or symbols and analyzing the images to identify the objects or decode the symbols. Accordingly, machine vision systems generally include one or more devices for image acquisition and image processing. In conventional applications, these devices can be used to acquire images, or to analyze acquired images, such as for the purpose of decoding imaged symbols such as barcodes or text. In some contexts, machine vision and other imaging systems can be used to acquire images of objects that may be larger than a field of view (FOV) for a corresponding imaging device and/or that may be moving relative to an imaging device.
SUMMARY
[0004] Aspects of the present disclosure relate to high performance machine vision system.
[0005] Some embodiments relate to a method for analyzing image data captured by a machine vision system comprising a primary imager and a secondary imager. The method may include receiving, from the primary imager, first image data and metadata generated by the primary imager, the metadata based at least in part on information provided by the secondary imager to the primary imager; receiving, from the secondary imager, second image data, wherein the second image data comprises a captured image as captured by the secondary imager; and generating correlated system information by correlating the first image data and the second image data based on the metadata.
[0006] Optionally, the method further comprises generating an image based on the correlated system information.
[0007] Optionally, the method further comprises displaying the image via a graphical user interface (GUI).
[0008] Optionally, the method further comprises generating a new image in response to receiving new first image data from the primary imager.
[0009] Optionally, the method further comprises, prior to receiving the second image data: decoding a symbol within the captured image; and generating the second image data based on a decode result.
[0010] Optionally, decoding the symbol and generating the second image data is performed by the secondary imager.
[0011] Optionally, the second image data further comprises a quality indication corresponding to the captured image, the quality indication determined by the primary imager based on the decode result.
[0012] Optionally, receiving the first image data and the metadata comprises receiving the first image data and the metadata over a first wired connection with the primary imager; and receiving the second image data comprises receiving the second image data over a second wired connections with the secondary imager.
[0013] Optionally, the correlated system information comprises status information of the machine vision system comprising at least one of: an indication of whether the primary imager is connected, an indication of whether the secondary imager is connected, a count of machine vision system triggers, a count of multi reads, or a count of objects.
[0014] Optionally, the status information of the machine vision system comprises status information for individual imagers of the primary imager and the secondary imager.
[0015] Optionally, the method further comprises generating a graphical analysis of a metric based on the correlated system information for the primary imager and the secondary imager, respectively.
[0016] Optionally, the graphical analysis comprises a chart visually depicting the metric.
[0017] Optionally, the metric comprises a symbol decode rate by the secondary imager.
[0018] Optionally, the method further comprises filtering the correlated system information according to a metric; and generating a representation of the filtered correlated system information.
[0019] Optionally, the method further comprises receiving, from a dimensioner associated with the machine vision system, an object dimension.
[0020] Optionally, the method further comprises receiving, from a scale associated with the machine vision system, an object weight.
[0021] Optionally, the metric comprises at least one of: dimensioner results, scale results, or sorter information.
[0022] Optionally, the metric comprises at least one of: machine vision system trigger information or decode results.
[0023] Some embodiments relate to a non-transitory computer-readable medium comprising instructions which, when executed, cause at least one processor to carry out the method described herein.
[0024] Some embodiments relate to a machine vision system. The machine vision system may include a primary imager configured to generate first image data and metadata; a secondary imager in communication with the primary imager and configured to generate second image data, wherein the second image data comprises a captured image as captured by the secondary imager; and at least one processor in communication with the primary imager and the secondary imager and configured to: receive the first image data and the metadata; receive the second image data; and generate correlated system information by correlating the first image data and the second image data based on the metadata, wherein the metadata is generated by the primary imager based on information provided by the secondary imager.
[0025] Optionally, the at least one processor is further configured to: generate an image based on the correlated system information; and transmit the image to a graphical user interface (GUI).
[0026] Optionally, the primary imager is configured to generate the first image data in response to receiving a trigger signal.
[0027] Optionally, the secondary imager is configured to generate the second image data in response to receiving the trigger signal from the primary imager.
[0028] Optionally, the second image data further comprises a decode result corresponding to a symbol within the captured image.
[0029] Optionally, the at least one processor communicates with the primary imager via a first connection, and the at least one processor communicates with the secondary imager via a second connection.
[0030] Optionally, the at least one processor is further configured to transmit status information for each of the primary imager and the secondary imager.
[0031] Optionally, the machine vision system further comprises a plurality of secondary imagers in communication with the primary imager and the at least one processor, the second image data generated by the plurality of secondary imagers.
[0032] Some embodiments relate to a computerized method for analyzing runtime data captured by a machine vision system comprising a primary imager and one or more secondary imagers. The method may include receiving, from the primary imager, first image data captured by the primary imager and metadata generated by the primary imager based on information provided by the one or more secondary imagers to the primary imager; receiving, from the one or more secondary imagers, second image data captured by the one or more secondary imagers; correlating the first image data and the second image data based on the metadata generated by the primary imager to generate correlated system information; and generating images based on the correlated system information for the primary imager and the one or more secondary imagers, respectively, for display.
[0033] Some embodiments relate to a method for analyzing runtime data captured by a machine vision system comprising a primary imager and one or more secondary imagers. The method may include receiving, from the primary imager, first image data captured by the primary imager and metadata generated by the primary imager based on information provided by the one or more secondary imagers to the primary imager; receiving, from the one or more secondary imagers, second image data captured by the one or more secondary imagers; correlating the first image data and the second image data based on the metadata generated by the primary imager to generate correlated system information; and generating one or more graphical analyses of one or more metrics based on the correlated system information for the primary imager and the one or more secondary imagers, respectively.
[0034] Some embodiments relate to a computerized method for analyzing runtime data captured by a machine vision system comprising a primary imager and one or more secondary imagers. The method may include receiving, from the primary imager, first image data captured by the primary imager and metadata generated by the primary imager based on information provided by the one or more secondary imagers to the primary imager; receiving, from the one or more secondary imagers, second image data captured by the one or more secondary imagers; correlating the first image data and the second image data based on the metadata generated by the primary imager to generate correlated system information; filtering the correlated system information according to one or more metrics; and generating a representation of the filtered correlated system information.
[0035] There has thus been outlined, rather broadly, the features of the disclosed subject matter in order that the detailed description thereof that follows may be better understood, and in order that the present contribution to the art may be better appreciated. There are, of course, additional features of the disclosed subject matter that will be described hereinafter and which will form the subject matter of the claims appended hereto. It is to be understood that the phraseology and terminology employed herein are for the purpose of description and should not be regarded as limiting.
BRIEF DESCRIPTION OF THE DRAWINGS
[0036] The accompanying drawings may not be drawn to scale. In the drawings, each identical or nearly identical component that is illustrated in various figures may be represented by a like numeral. For purposes of clarity, not every component may be labeled in every drawing. In the drawings:
[0037] FIG. 1A is a schematic diagram illustrating an exemplary system configured to capture multiple images of sides of an object, according to some embodiments.
[0038] FIG. IB is another schematic diagram of the system of FIG. 1A with additional of a dimensioner and a motion measurement device, according to some embodiments.
[0039] FIG. 2 is a schematic diagram illustrating another exemplary system configured to capture multiple images of sides of an object, according to some embodiments.
[0040] FIG. 3 is a schematic diagram illustrating a third exemplary system configured to capture multiple images of sides of an object, according to some embodiments.
[0041] FIG. 4 is a schematic diagram illustrating a high performance vision system, according to some embodiments.
[0042] FIG. 5A is a flow chart illustrating a method for analyzing image data captured by a machine vision system, according to some embodiments.
[0043] FIG. 5B is a flow chart illustrating a method for providing a live dashboard, according to some embodiments.
[0044] FIG. 6A is an exemplary user interface showing live captured images, according to some embodiments.
[0045] FIG. 6B is another exemplary user interface showing live captured images, according to some embodiments.
[0046] FIG. 6C is the exemplary user interface of FIG. 6B when Tunnel Connection Status is selected, according to some embodiments.
[0047] FIG. 7 is an exemplary user interface showing live stitched images, according to some embodiments.
[0048] FIG. 8 is a flow chart illustrating a method for providing a performance dashboard, according to some embodiments.
[0049] FIG. 9 is a schematic diagram illustrating chart types available through a performance dashboard, according to embodiments.
[0050] FIG. 10 is an exemplary timeseries chart showing read rate of a device over time, according to some embodiments.
[0051] FIG. 11 is an exemplary numerical distribution showing dimensions distribution captured by a device, according to some embodiments.
[0052] FIG. 12A is an exemplary timeseries chart showing read rate of another device over time with one (1) hour intervals, according to some embodiments.
[0053] FIG. 12B is an exemplary timeseries chart showing the read rate of FIG. 12A over time with fifteen (15) minute intervals, according to some embodiments.
[0054] FIG. 13 is an exemplary timeseries chart showing read rates of multiple devices over time, according to some embodiments.
[0055] FIG. 14A is an exemplary user interface for creating new events, according to some embodiments.
[0056] FIG. 14B is an exemplary user interface showing created events, according to some embodiments.
[0057] FIG. 14C is another exemplary user interface for creating new events, according to some embodiments.
[0058] FIG. 14D is another exemplary user interface showing created events, according to some embodiments.
[0059] FIG. 15 is an exemplary user interface for viewing, filtering, and sorting notifications, according to some embodiments.
[0060] FIG. 16 is a flow chart illustrating a method for providing a result browser, according to some embodiments.
[0061] FIG. 17 is an exemplary user interface showing trigger data in a table, according to some embodiments.
[0062] FIG. 18A is an exemplary user interface for searching trigger data, according to some embodiments.
[0063] FIG. 18B is an exemplary user interface for searching the trigger data by decode results, according to some embodiments.
[0064] FIG. 18C is an exemplary user interface for searching the trigger data by dimensioner, according to some embodiments.
[0065] FIG. 18D is an exemplary user interface showing the result of the search of FIG. 18C, according to some embodiments.
[0066] FIG. 18E is an exemplary user interface for searching the trigger data by scale result, according to some embodiments.
[0067] FIG. 18F is an exemplary user interface for searching the trigger data by sorter information, according to some embodiments.
[0068] FIG. 19A is another exemplary user interface for searching trigger data, according to some embodiments.
[0069] FIG. 19B is an exemplary user interface for searching the trigger data by trigger information, according to some embodiments.
[0070] FIG. 19C is an exemplary user interface for downloading the search results, according to some embodiments.
[0071] FIG. 20 is an exemplary user interface for editing result tags, according to some embodiments.
[0072] FIG. 21A is an exemplary user interface showing trigger data in a table, according to some embodiments.
[0073] FIG. 21B is an exemplary user interface for reconfiguring the table showing the trigger data, according to some embodiments.
[0074] FIG. 21C is an exemplary user interface showing the result of a reconfigured table, according to some embodiments.
[0075] FIG. 22 is an exemplary user interface for reviewing trigger data, according to some embodiments.
DETAILED DESCRIPTION
[0076] The techniques described herein provide various visibility features that can be used to analyze machine vision systems. Machine vision systems can be used to perform various tasks or processes, such as inspection processes, manufacturing processes, warehouse processes, and/or other processes that leverage machine vision. A machine vision system may include several devices that are used to perform the machine vision task. The devices can include, for example, one or more imaging devices configured to acquire image data and/or one or more measuring devices (e.g., integrated device sensors) configured to measure objects within a field of view (FOV) of the machine vision system.
[0077] Each device of a machine vision system can capture its own associated data, such as image data and/or other sensor data. Over time, such data can result in a large amount of data for the machine vision system. The inventors have recognized and appreciated that interpreting massive runtime data of a machine vision system, especially in real time, can be quite challenging, and may not be possible depending on the constraints of the machine vision system. For example, real time data interpretation may require transmitting and/or processing image data in a very short time period for the data to be relevant or useful for analyzing the machine vision system. Such interpretation may not be possible depending on latencies or other constraints of the machine vision system. Further, even if such interpretation is possible, too large of a delay in interpreting the runtime data can cause missed opportunities in addressing problems that might be occurring in the machine vision system (e.g., such as a no read result and/or errors in readings).
[0078] The inventors have developed technological improvements to techniques to address these and other problems. According to some embodiments, a machine vision system may include a live dashboard, a performance dashboard, and a result browser. The live dashboard can provide, for viewing, a live stream of images and associated machine vision data (e.g., triggers, results, etc.). The live dashboard can enable real-time analysis of the machine vision system, such as to determine whether the machine vision system including, for example, tunnels, is being triggered properly, whether packages are moving through the system correctly, etc. The performance dashboard can provide, for viewing, graphical analyses of various data associated with the machine vision system. The performance dashboard can enable real-time analysis of the machine vision system, such as to determine whether the system is functioning properly, etc. The result browser may provide presentations by various metrics such as individual triggers. The result browser can enable customized analyses of the machine vision system by system data such as triggers, which may expose hidden problems in the system.
[0079] In some embodiments, a machine vision system may be implemented in a tunnel arrangement (or system), which may include a conveyor and a structure holding the devices such that each device may be positioned at an angle relative to the conveyor resulting in an angled FOV. The FOVs of one or more imaging devices may overlap. The imaging devices may be configured to acquire image data of a shared scene such as objects disposed on the conveyor and moving into the FOVs of the imaging devices by the conveyor. The measuring devices may be configured to acquire measurement data of these objects, which may compensate the image data acquired by the imaging devices to provide various aspects of the performance of the machine vision system.
[0080] Each imaging device may be configured to include multiple imagers, each of which may be configured to capture image data of the objects in the shared scene. One of the multiple imagers may be configured as a primary imager; and the rest of the multiple imagers may be configured as secondary imagers. The secondary imagers may be configured to provide information to the primary imager such that the primary imager may generate metadata based on the information provided by the secondary imagers. The metadata generated by the primary imager may indicate relationship between image data and respective imagers.
[0081] The measuring devices may include a dimensioner and/or a motion measurement device. The dimensioner may be configured to measure dimensions (e.g., height, length, and width) of the objects in the shared scene. The motion measurement device may be configured to track the physical movements of the objects in the shared scene, based on which rates of capture by the imaging devices may be derived.
[0082] The imaging devices and measuring devices may acquire respective image data and measurement data corresponding to trigger signals. The trigger signals may be initiated by the imaging devices and measuring devices, an image processing device, or any suitable processors/servers/computing devices integrated with the imaging devices and measuring devices or remote from the imaging devices and measuring devices. According to some embodiments, the image processing device may be configured to provide trigger signals to cause the imaging devices to acquire image data and/or the measuring devices to acquire measurement data. Each trigger signal may be referred to as a trigger. The image processing device may be configured to receive the image data from the imaging devices, the metadata from the primary imagers of the imaging devices, and the measurement data acquired by the measuring devices. The image processing device may be configured to decode symbols from the received image data. The image processing device may be configured to generate composite image data by, for example, stitching image data acquired by multiple imagers. The image processing device may be configured to generate correlated system information, based at least in part on the metadata generated by the primary imagers of the imaging devices, by correlating the received image data to respective imagers of respective imaging devices, correlating the composite image data to sides of the captured objects in the shared scene, and/or correlating the received measurement data to the received image data and/or the composite image data.
[0083] The inventors have developed techniques that can provide various visibility features (e.g., which can be used to analyze machine vision systems). In some embodiments, the techniques use system data, such as the correlated system information, to provide flexible runtime data through a live dashboard. For example, the techniques can provide live captured images and/or live stitched images. The live data can be data provided within a threshold time period (e.g., whereas otherwise data provided after the threshold time period can be considered historical data). In some embodiments, the techniques described herein may provide for generating images for display within a threshold time period from when the image data is acquired by the imagers of the imaging devices, respectively. The threshold time period may be, for example, five to twenty seconds when the images are generated based on composite image data from a plurality of imaging devices, and two to ten seconds (e.g., five seconds) when the images are generated based on received image data from individual imaging devices. The techniques described herein can continuously provide live data from the machine vision system, and can therefore provide live result streams. Such live data may enable a reviewer of the live data to monitor and/or analyze the machine vision system. In some embodiments, a reviewer can pause the live data so as to investigate areas of interest of the machine vision system, such as data associated with a particular trigger.
[0084] In some embodiments, the techniques allow users to analyze the system data, such as to perform customizable data analysis (e.g., including graphical charts and statistics), data searching and/or data filtering through a performance dashboard. The system data may be aggregated, based at least in part on the correlated system information, into one or more performance metrics, which may provide a holistic view of the performance of the system. The one or more performance metrics may include, for example, one or more metrics for individual devices in the machine vision system, one or more metrics for individual triggers, and/or the like.
[0085] The techniques described herein can provide for generating graphical analyses of the one or more metrics for individual devices in the machine vision system. The one or more metrics may include a rate of capture by a respective device, dimensions of objects captured by a respective device, scales of objects captured by a respective device, decode time by a respective device, etc. The graphical analyses may include one or more charts. In some embodiments, the charts may show one or more of the metrics of respective devices over time. For example, a step chart may show rates of capture or good read count by respective imagers in an imaging device. As another example, a histogram chart may show dimensions of objects captured by a respective imager. The one or more metrics for individual devices in the machine vision system may be customized according to requests. For example, minimum values, maximum values, and/or mean values of the one or more metrics for individual devices in the machine vision system may be provided.
[0086] In some embodiments, the techniques described herein may provide a result browser that enables image data from various imaging devices and measurement data from various measuring devices to be organized and searchable based on performance metrics. For example, the techniques described herein may enable the system data to be searched and filtered by one or more metrics for individual triggers. The one or more metrics for individual triggers may include, for example, trigger information, decode results, dimensioner results, scale results, and/or sorter information. A representation of the filtered system data may be generated, which may facilitate analyses of the performance of system (e.g., based on triggers). For example, if a read rate of a system is decreasing (e.g., indicative of symbols not being read), the techniques can allow the system data to be narrowed to a period of interest related to the decreased read rate, which can allow the user to view images from that time period to check for and/or diagnose a problem with the tunnel that may be causing the decreased read rate. In some embodiments, the techniques can allow system data to be broken into sub-results (e.g., data for symbol readers, data for dimensioners, etc.).
[0087] In the following description, numerous specific details are set forth regarding the systems and methods of the disclosed subject matter and the environment in which such systems and methods may operate, etc., in order to provide a thorough understanding of the disclosed subject matter. In addition, it will be understood that the examples provided below are exemplary, and that it is contemplated that there are other systems and methods that are within the scope of the disclosed subject matter.
[0088] FIG. 1A shows an example of a system 100 for capturing multiple images of each side of an object in accordance with an embodiment of the technology. In some embodiments, system 100 can be configured to evaluate symbols (e.g., barcodes, two-dimensional (2D) codes, fiducials, hazmat, machine readable code, etc.) on objects (e.g., objects 118a, 118b) moving through a tunnel 102, such as a symbol 120 on object 118a, including assigning symbols to objects (e.g., objects 118a, 118b). In some embodiments, symbol 120 is a flat 2D barcode on a top surface of object 118a, and objects 118a and 118b are roughly cuboid boxes. Additionally or alternatively, in some embodiments, any suitable geometries are possible for an object to be imaged, and any variety of symbols and symbol locations can be imaged and evaluated, including non-direct part mark (DPM) symbols and DPM symbols located on a top or any other side of an object.
[0089] In FIG. 1A, objects 118a and 118b are disposed on a conveyor 116 that is configured to move objects 118a and 118b in a horizontal direction through tunnel 102 at a relatively predictable and continuous rate, or at a variable rate measured by a device, such as an encoder or other motion measurement device. Additionally or alternatively, objects can be moved through tunnel 102 in other ways (e.g., with non-linear movement). In some embodiments, conveyor 116 can include a conveyor belt. In some embodiments, conveyor 116 can be implemented using other types of transport systems.
[0090] In some embodiments, system 100 can include imaging devices 112 and an image processing device 132. For example, system 100 can include multiple imaging devices in a tunnel arrangement (e.g., implementing a portion of tunnel 102), representatively shown via imaging devices 112a, 112b, and 112c, each with a field-of-view ("FOV"), representatively shown via FOVs 114a, 114b, 114c, that includes part of the conveyor 116. In some embodiments, each imaging device 112 can be positioned at an angle relative to the conveyor top or side (e.g., at an angle relative to a normal direction of symbols on the sides of the objects 118a and 118b or relative to the direction of travel), resulting in an angled FOV. Similarly, some of the FOVs can overlap with other FOVs (e.g., FOV 114a and FOV 114b). In such embodiments, system 100 can be configured to capture one or more images of multiple sides of objects 118a and/or 118b as the objects are moved by conveyor 116. In some embodiments, the captured images can be used to identify symbols on each object (e.g., a symbol 120) and/or assign symbols to each object, which can be subsequently decoded (as appropriate). In some embodiments, a gap in conveyor 116 (not shown) can facilitate imaging of a bottom side of an object (e.g., as described in U.S. Patent Application Publication No. 2019/0333259, filed on April 25, 2018, which is hereby incorporated by reference herein in its entirety) using an imaging device or array of imaging devices (not shown) disposed below conveyor 116. In some embodiments, the captured images from a bottom side of the object may also be used to identify symbols on the object and/or assign symbols to each object, which can be subsequently decoded (as appropriate). Note that although two arrays of three imaging devices 112 are shown imaging a top of objects 118a and 118b, and four arrays of two imaging devices 112 are shown imaging sides of objects 118a and 118b, this is merely an example, and any suitable number of imaging devices can be used to capture images of various sides of objects. For example, each array can include four or more imaging devices. In some embodiments, an array of imaging devices 112 may be referred to as an imaging device, and imaging devices 112 in the array may be referred to as imagers. Additionally, although imaging devices 112 are generally shown imaging objects 118a and 118b without mirrors to redirect a FOV, this is merely an example, and one or more fixed and/or steerable mirrors can be used to redirect a FOV of one or more of the imaging devices as described below with respect to FIGs. 2 and 3, which may facilitate a reduced vertical or lateral distance between imaging devices and objects in tunnel 102. For example, imaging device 112a can be disposed with an optical axis parallel to conveyor 116, and one or more mirrors can be disposed above tunnel 102 to redirect a FOV from imaging device 112a toward a front and top of objects in tunnel 102.
[0091] In some embodiments, imaging devices 112 can be implemented using any suitable type of imaging device(s). For example, imaging devices 112 can be implemented using 2D imaging devices (e.g., 2D cameras), such as area scan cameras and/or line scan cameras. In some embodiments, imaging device 112 can be an integrated system that includes a lens assembly and an imager, such as a CCD or CMOS sensor. In some embodiments, imaging devices 112 may each include one or more image sensors, at least one lens arrangement, and at least one control device (e.g., a processor device) configured to execute computational operations relative to the image sensor. Each of the imaging devices 112a, 112b, or 112c can selectively acquire image data from different fields of view (FOVs). In some embodiments, system 100 can be utilized to acquire multiple images of each side of an object where one or more images may include more than one object. Object 118 may be associated with one or more symbols, such as a barcode, a QR code, etc. In some embodiments, system 100 can be configured to facilitate imaging of the bottom side of an object supported by conveyor 116 (e.g., the side of object 118a resting on conveyor 116). For example, conveyor 116 may be implemented with a gap (not shown).
[0092] In some embodiments, a gap 122 is provided between objects 118a, 118b. In different implementations, gaps between objects can range in size. In some implementations, gaps between objects can be substantially the same between all sets of objects in a system, or can exhibit a fixed minimum size for all sets of objects in a system. In some embodiments, smaller gap sizes may be used to maximize system throughput.
[0093] In some embodiments, system 100 can include a dimensioning system (not shown), sometimes referred to herein as a dimensioner, that can measure dimensions of objects moving toward tunnel 102 on conveyor 116, and such dimensions can be used (e.g., by image processing device 132) in a process to assign a symbol to an object in an image captured as one or more objects move through tunnel 102. Additionally, system 100 can include devices (e.g., an encoder or other motion measurement device, not shown) to track the physical movement of objects (e.g., objects 118a, 118b) moving through the tunnel 102 on the conveyor 116. FIG. 1B shows an example of a system for capturing multiple images of each side of an object in accordance with an embodiment of the technology. FIG. 1B shows a simplified diagram of a system 140 to illustrate an example arrangement of a dimensioner and a motion measurement device (e.g., an encoder) with respect to a tunnel. As mentioned above, the system 140 may include a dimensioner 150 and a motion measurement device 152. In the illustrated example, a conveyor 116 is configured to move objects 118d, 118e along the direction indicated by arrow 154 past a dimensioner 150 before the objects 118d, 118e are imaged by one or more imaging devices 112. In the illustrated embodiment, a gap 156 is provided between objects 118d and 118e, and an image processing device 132 may be in communication with imaging devices 112, dimensioner 150, and motion measurement device 152. Dimensioner 150 can be configured to determine dimensions and/or a location of an object supported by conveyor 116 (e.g., object 118d or 118e) at a certain point in time. For example, dimensioner 150 can be configured to determine a distance from dimensioner 150 to a top surface of the object, and can be configured to determine a size and/or orientation of a surface facing dimensioner 150. In some embodiments, dimensioner 150 can be implemented using various technologies. For example, dimensioner 150 can be implemented using a 3D camera (e.g., a structured light 3D camera, a continuous time of flight 3D camera, etc.). As another example, dimensioner 150 can be implemented using a laser scanning system (e.g., a LiDAR system). In a particular example, dimensioner 150 can be implemented using a 3D-A1000 system available from Cognex Corporation. In some embodiments, the dimensioning system or dimensioner (e.g., a time-of-flight sensor or a system that computes dimensions from stereo) may be implemented in a single device or enclosure with an imaging device (e.g., a 2D camera) and, in some embodiments, a processor (e.g., that may be utilized as the image processing device) may also be implemented in the device with the dimensioner and imaging device.
[0094] In some embodiments, dimensioner 150 can determine 3D coordinates of each corner of the object in a coordinate space defined with reference to one or more portions of system 140. For example, dimensioner 150 can determine 3D coordinates of each of eight corners of an object that is at least roughly cuboid in shape within a Cartesian coordinate space defined with an origin at dimensioner 150. As another example, dimensioner 150 can determine 3D coordinates of each of eight corners of an object that is at least roughly cuboid in shape within a Cartesian coordinate space defined with respect to conveyor 116 (e.g., with an origin at a center of conveyor 116).
[0095] In some embodiments, a motion measurement device 152 (e.g., an encoder) may be linked to the conveyor 116 and imaging devices 112 to provide electronic signals to the imaging devices 112 and/or image processing device 132 that indicate the amount of travel of the conveyor 116, and the objects 118d, 118e supported thereon, over a known amount of time. This may be useful, for example, in order to coordinate capture of images of particular objects (e.g., objects 118d, 118e), based on calculated locations of the object relative to a field of view of a relevant imaging device (e.g., imaging device(s) 112). In some embodiments, motion measurement device 152 may be configured to generate a pulse count that can be used to identify the position of conveyor 116 along the direction of arrow 154. For example, motion measurement device 152 may provide the pulse count to image processing device 132 for identifying and tracking the positions of objects (e.g., objects 118d, 118e) on conveyor 116. In some embodiments, the motion measurement device 152 can increment a pulse count each time conveyor 116 moves a predetermined distance (encoder pulse count distance) in the direction of arrow 154. In some embodiments, an object's position can be determined based on an initial position, the change in the pulse count, and the pulse count distance.
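As a simple, non-limiting illustration of the position calculation described above (position = initial position + pulse-count change x pulse distance), the following sketch estimates an object's position from the encoder pulse count; the units and values are hypothetical.

```python
def object_position(initial_position_mm, initial_pulse_count,
                    current_pulse_count, pulse_distance_mm):
    """Estimate an object's position along the conveyor from encoder pulses.

    The encoder increments its pulse count each time the conveyor advances a
    fixed distance, so the object's position is its initial position plus the
    change in pulse count multiplied by the per-pulse distance.
    """
    return initial_position_mm + (current_pulse_count - initial_pulse_count) * pulse_distance_mm

# Example: the object entered the tunnel at position 0 mm when the count was
# 1200; the conveyor advances 2 mm per pulse and the count is now 1850.
print(object_position(0.0, 1200, 1850, 2.0))  # 1300.0 mm downstream
```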
[0096] In some embodiments, image processing device 132 (or a control device) can coordinate operations of various components of system 100. For example, image processing device 132 can cause a dimensioner (e.g., dimensioner 150 shown in FIG. IB) to acquire dimensions of an object positioned on conveyor 116 and can cause imaging devices 112 to capture images of each side. In some embodiments, image processing device 132 can control detailed operations of each imaging device, for example, by providing trigger signals to cause the imaging device to capture images at particular times, etc. Alternatively, in some embodiments, another device (e.g., a processor included in each imaging device, a separate controller device, etc.) can control detailed operations of each imaging device. For example, image processing device 132 (and/or any other suitable device) can provide a trigger signal to each imaging device and/or dimensioner (e.g., dimensioner 150 shown in FIG. IB), and a processor of each imaging device can be configured to implement a predesignated image acquisition sequence that spans a predetermined region of interest in response to the trigger. Note that system 100 can also include one or more light sources (not shown) to illuminate surfaces of an object, and operation of such light sources can also be coordinated by a central device (e.g., image processing device 132), and/or control can be decentralized (e.g., an imaging device can control operation of one or more light sources, a processor associated with one or more light sources can control operation of the light sources, etc.). For example, in some embodiments, system 100 can be configured to concurrently (e.g., at the same time or over a common time interval) acquire images of multiple sides of an object, including as part of a single trigger event. For example, each imaging device 112 can be configured to acquire a respective set of one or more images over a common time interval. Additionally or alternatively, in some embodiments, imaging devices 112 can be configured to acquire the images based on a single trigger event. For example, based on a sensor (e.g., a contact sensor, a presence sensor, an imaging device, etc.) determining that object 118 has passed into the FOV of the imaging devices 112, imaging devices 112 can concurrently acquire images of the respective sides of object 118.
[0097] In some embodiments, each imaging device 112 can generate an image set depicting a FOV or various FOVs of a particular side or sides of an object supported by conveyor 116 (e.g., object 118). In some embodiments, image processing device 132 can map 3D locations of one or more corners of object 118 to a 2D location within each image in the set of images output by each imaging device. In some embodiments, the image processing device can generate a mask that identifies which portion of an image is associated with each side (e.g., a bit mask with a 1 indicating the presence of a particular side, and a 0 indicating an absence of a particular side) based on the 2D location of each corner. In some embodiments, the image processing device can stitch images associated with a same side of an object into one image that shows a more complete view of the side of the object (e.g., as described in U.S. Application No. 17/019,742, filed on September 14, 2020, which is hereby incorporated by reference herein in its entirety; and in U.S. Application No. 17/837,998, filed on June 10, 2022, which is hereby incorporated by reference herein in its entirety). In some embodiments, the 3D locations of one or more corners of a target object (e.g., object 118a) as well as the 3D locations of one or more corners of an object 118c (a leading object) ahead of the target object 118a on the conveyor 116 and/or the 3D locations of one or more corners of an object 118b (a trailing object) behind the target object 118a on the conveyor 116 may be mapped to a 2D location within each image in the set of images output by each imaging device. Accordingly, if an image captures more than one object (118a, 118b, 118c), one or more corners of each object in the image may be mapped to the 2D image.
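The following sketch illustrates, under simplifying assumptions (a pinhole camera model with known intrinsics, corner coordinates already expressed in the camera frame, and a convex projected face), how 3D corner locations could be mapped to 2D image locations and used to build a bit mask for one side of an object. It is an illustrative sketch only, not the specific mapping performed by image processing device 132; a practical system would also apply camera-to-conveyor extrinsics and lens distortion.

```python
import numpy as np

def project_points(corners_cam, fx, fy, cx, cy):
    """Project 3D corner points (camera coordinates, metres) to 2D pixel
    locations using a simple pinhole model."""
    corners_cam = np.asarray(corners_cam, dtype=float)
    u = fx * corners_cam[:, 0] / corners_cam[:, 2] + cx
    v = fy * corners_cam[:, 1] / corners_cam[:, 2] + cy
    return np.stack([u, v], axis=1)

def side_mask(image_shape, quad_px):
    """Bit mask with 1 where the projected (convex) face covers the image.

    Corners are assumed ordered so that the face interior lies on the
    non-negative side of every edge's cross product.
    """
    h, w = image_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    inside = np.ones(pts.shape[0], dtype=bool)
    n = len(quad_px)
    for i in range(n):
        a, b = np.asarray(quad_px[i]), np.asarray(quad_px[(i + 1) % n])
        edge, rel = b - a, pts - a
        cross = edge[0] * rel[:, 1] - edge[1] * rel[:, 0]
        inside &= cross >= 0
    return inside.reshape(h, w).astype(np.uint8)

# Four corners of the top face of a box roughly 1 m below the camera.
top_face = [[-0.2, -0.15, 1.0], [0.2, -0.15, 1.0], [0.2, 0.15, 1.0], [-0.2, 0.15, 1.0]]
quad = project_points(top_face, fx=1200, fy=1200, cx=640, cy=480)
mask = side_mask((960, 1280), quad)
print(int(mask.sum()), "pixels covered by the top face")
```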
[0098] As mentioned above, one or more fixed and/or steerable mirrors can be used to redirect a FOV of one or more of the imaging devices, which may facilitate a reduced vertical or lateral distance between imaging devices and objects in tunnel 102. FIG. 2 shows another example of a system for capturing multiple images of each side of an object in accordance with an embodiment of the technology. System 200 includes multiple banks of imaging devices 212, 214, 216, 218, 220, 222 and multiple mirrors 224, 226, 228, 230 in a tunnel arrangement 202. For example, the banks of imaging devices shown in FIG. 2 include a left trail bank 212, a left lead bank 214, a top trail bank 216, a top lead bank 218, a right trail bank 220 and a right lead bank 222. In the illustrated embodiment, each bank 212, 214, 216, 218, 220, 222 includes four imaging devices that are configured to capture images of one or more sides of an object (e.g., object 208a) and various FOVs of the one or more sides of the object. For example, top trail bank 216 and mirror 228 may be configured to capture images of the top and back surfaces of an object using imaging devices 234, 236, 238, and 240. In the illustrated embodiment, the banks of imaging devices 212, 214, 216, 218, 220, 222 and mirrors 224, 226, 228, 230 can be mechanically coupled to a support structure 242 above a conveyor 204. Note that although the illustrated mounting positions of the banks of imaging devices 212, 214, 216, 218, 220, 222 relative to one another can be advantageous, in some embodiments, imaging devices for imaging different sides of an object can be reoriented relative to the illustrated positions in FIG. 2 (e.g., imaging devices can be offset, imaging devices can be placed at the corners, rather than the sides, etc.). Similarly, while there can be advantages associated with using four imaging devices per bank that are each configured to acquire image data from one or more sides of an object, in some embodiments, a different number or arrangement of imaging devices, or a different arrangement of mirrors (e.g., using steerable mirrors, using additional fixed mirrors, etc.), can be used to configure a particular imaging device to acquire images of multiple sides of an object. In some embodiments, an imaging device can be dedicated to acquiring images of multiple sides of an object, including with overlapping acquisition areas relative to other imaging devices included in the same system. In some embodiments, a bank (e.g., bank 212, 214, 216, 218, 220, or 222) of imaging devices may be referred to as an imaging device, and the imaging devices (e.g., imaging devices 234, 236, 238, 240) in the bank may be referred to as imagers.
[0099] In some embodiments, system 200 also includes a dimensioner 206 and an image processing device 232. As discussed above, multiple objects 208a, 208b and 208c may be supported on the conveyor 204 and travel through the tunnel 202 along a direction indicated by arrow 210. In some embodiments, each bank of imaging devices 212, 214, 216, 218, 220, 222 (and each imaging device in a bank) can generate a set of images depicting a FOV or various FOVs of a particular side or sides of an object supported by conveyor 204 (e.g., object 208a).
[00100] Note that although FIGs. 1A and 2 depict a dynamic support structure (e.g., conveyor 116, conveyor 204) that is moveable, in some embodiments, a stationary support structure may be used to support objects to be imaged by one or more imaging devices. FIG. 3 shows another example system for capturing multiple images of each side of an object in accordance with an embodiment of the technology. In some embodiments, system 300 can include multiple imaging devices 302, 304, 306, 308, 310, and 312, which can each include one or more image sensors, at least one lens arrangement, and at least one control device (e.g., a processor device) configured to execute computational operations relative to the image sensor. In some embodiments, imaging devices 302, 304, 306, 308, 310, and/or 312 can include and/or be associated with a steerable mirror (e.g., as described in U.S. Application No. 17/071,636, filed on October 13, 2020, which is hereby incorporated by reference herein in its entirety). Each of the imaging devices 302, 304, 306, 308, 310, and/or 312 can selectively acquire image data from different fields of view (FOVs), corresponding to different orientations of the associated steerable mirror(s). In some embodiments, system 300 can be utilized to acquire multiple images of each side of an object.

[00101] In some embodiments, system 300 can be used to acquire images of multiple objects presented for image acquisition. For example, system 300 can include a support structure that supports each of the imaging devices 302, 304, 306, 308, 310, 312 and a platform 316 configured to support one or more objects 318, 334, 336 to be imaged (note that each object 318, 334, 336 may be associated with one or more symbols, such as a barcode, a QR code, etc.). For example, a transport system (not shown), including one or more robot arms (e.g., a robot bin picker), may be used to position multiple objects (e.g., in a bin or other container) on platform 316. In some embodiments, the support structure can be configured as a caged support structure. However, this is merely an example, and the support structure can be implemented in various configurations. In some embodiments, support platform 316 can be configured to facilitate imaging of the bottom side of one or more objects supported by the support platform 316 (e.g., the side of an object (e.g., object 318, 334, or 336) resting on platform 316). For example, support platform 316 can be implemented using a transparent platform, a mesh or grid platform, an open center platform, or any other suitable configuration. Other than the presence of support platform 316, acquisition of images of the bottom side can be substantially similar to acquisition of images of other sides of the object.

[00102] In some embodiments, imaging devices 302, 304, 306, 308, 310, and/or 312 can be oriented such that a FOV of the imaging device can be used to acquire images of a particular side of an object resting on support platform 316, such that each side of an object (e.g., object 318) placed on and supported by support platform 316 can be imaged by imaging devices 302, 304, 306, 308, 310, and/or 312.
For example, imaging device 302 can be mechanically coupled to the support structure above support platform 316, and can be oriented toward an upper surface of support platform 316, imaging device 304 can be mechanically coupled to the support structure below support platform 316, and imaging devices 306, 308, 310, and/or 312 can each be mechanically coupled to a side of the support structure, such that a FOV of each of imaging devices 306, 308, 310, and/or 312 faces a lateral side of support platform 316.
[00103] In some embodiments, each imaging device can be configured with an optical axis that is generally parallel with that of another imaging device, and perpendicular to those of the other imaging devices (e.g., when the steerable mirror is in a neutral position). For example, imaging devices 302 and 304 can be configured to face each other (e.g., such that the imaging devices have substantially parallel optical axes), and the other imaging devices can be configured to have optical axes that are orthogonal to the optical axes of imaging devices 302 and 304.
[00104] Note that although the illustrated mounting positions of the imaging devices 302, 304, 306, 308, 310, and 312 relative to one another can be advantageous, in some embodiments, imaging devices for imaging different sides of an object can be reoriented relative to the illustrated positions of FIG. 3 (e.g., imaging devices can be offset, imaging devices can be placed at the corners, rather than the sides, etc.). Similarly, while there can be advantages (e.g., increased acquisition speed) associated with using six imaging devices that are each configured to acquire image data from a respective side of an object (e.g., the six sides of object 318), in some embodiments, a different number or arrangement of imaging devices, or a different arrangement of mirrors (e.g., using fixed mirrors, using additional moveable mirrors, etc.), can be used to configure a particular imaging device to acquire images of multiple sides of an object. For example, fixed mirrors can be disposed such that imaging devices 306 and 310 can capture images of a far side of object 318, and can be used in lieu of imaging devices 308 and 312. In some embodiments, system 300 can be configured to image each of the multiple objects 318, 334, 336 on the platform 316.
[00105] In some embodiments, system 300 can include a dimensioner 330. As described above with respect to FIGs. 1A, 1B and 2, a dimensioner can be configured to determine dimensions and/or a location of an object supported by support platform 316 (e.g., object 318, 334, or 336). As mentioned above, in some embodiments, dimensioner 330 can determine 3D coordinates of each corner of the object in a coordinate space defined with reference to one or more portions of system 300. For example, dimensioner 330 can determine 3D coordinates of each of eight corners of an object that is at least roughly cuboid in shape within a Cartesian coordinate space defined with an origin at dimensioner 330. As another example, dimensioner 330 can determine 3D coordinates of each of eight corners of an object that is at least roughly cuboid in shape within a Cartesian coordinate space defined with respect to support platform 316 (e.g., with an origin at a center of support platform 316).
[00106] In some embodiments, an image processing device 332 can coordinate operations of imaging devices 302, 304, 306, 308, 310, and/or 312 and/or may be configured similar to the image processing devices described herein (e.g., image processing device 132 of FIG. 1A, image processing device 232 of FIG. 2, and image processing device 408 of FIG. 4).
[00107] FIG. 4 illustrates a high performance machine vision system 400, which may be configured to provide sufficient visibility into the machine vision system 400 so as to enable the maintenance of the performance of the machine vision system 400, for example, in real time. The machine vision system 400 may include one or more imaging devices 402 (only one of which is illustrated). Each imaging device 402 may include a primary imager 412 and one or more secondary imagers 414. The one or more secondary imagers 414 may be configured to provide information to the primary imager 412 over, for example, a wired connection, which may deliver the information faster than a wireless connection. The primary imager 412 may be configured to generate metadata based on the information provided by the one or more secondary imagers 414. The metadata generated by the primary imager 412 may indicate relationships between image data and the respective imagers that captured the image data.
[00108] As illustrated, the machine vision system 400 may include a dimensioner 404 and a motion measurement device 406. The dimensioner 404 may be configured to measure dimensions (e.g., height, length, and width) of the objects captured by the imaging device 402, based on which performance metrics of a tunnel such as distances between objects in the tunnel may be derived. The dimensioner 404 may be configured similar to dimensioners described herein (e.g., dimensioner 150 of FIG. IB, dimensioner 206 of FIG. 2, and dimensioner 330 of FIG. 3). The motion measurement device 406 may be configured to track the physical movements of the objects captured by the imaging device 402, based on which performance metrics of a tunnel such as rates of capture by respective imagers 412 and 414 of the imaging device 402 may be derived. The motion measurement device 406 may be configured similar to motion measurement device described herein (e.g., motion measurement device 152 of FIG. IB).
[00109] An image processing device 408 may be configured to control when the imaging device 402, dimensioner 404, and motion measurement device 406 can acquire respective image data and measurement data through, for example, providing trigger signals configured to cause the imaging device 402 to acquire image data and the dimensioner 404 and motion measurement device 406 to acquire measurement data. In some embodiments, the primary imager 412 can receive a trigger. The primary imager 412 can relay the trigger to the one or more secondary imagers 414. The triggered imagers can capture images. The primary imager 412 can generate metadata about the trigger, such as a time of trigger, a count of decode results of images captured according to the trigger, etc.
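For illustration only, the sketch below shows one possible structure for such per-trigger metadata; the class name and fields (e.g., "contributing_imagers") are hypothetical and not prescribed by the described system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TriggerMetadata:
    """Hypothetical metadata record a primary imager might emit per trigger."""
    trigger_index: int
    trigger_time: float            # e.g., seconds since epoch
    decode_result_count: int = 0   # decode results received for this trigger
    contributing_imagers: List[str] = field(default_factory=list)

# The primary imager relays the trigger, then fills in the record as the
# secondary imagers report back.
meta = TriggerMetadata(trigger_index=101, trigger_time=1706181603.2)
meta.contributing_imagers += ["primary", "secondary_1", "secondary_2"]
meta.decode_result_count = 3
print(meta)
```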
[00110] The image processing device 408 may be configured to receive the image data from the imaging device 402, the metadata from the primary imager 412 of the imaging device 402, and the measurement data acquired by the dimensioner 404 and motion measurement device 406 over, for example, one or more wired connections, which may deliver the information faster than wireless connections. The image processing device 408 may be configured to receive the image data and metadata from the primary imager 412 of the imaging device 402 over a first wired connection, and the image data from the secondary imagers 414 of the imaging device 402 over one or more second wired connections. Such a configuration may enable the image processing device 408 to receive and interpret respective data simultaneously and in real time.
[00111] In some embodiments, the image processing device 408 may be configured to interpret the received data. The image processing device 408 may include a symbol decoder 416, which may be configured to decode symbols from the received image data from the one or more imaging devices 402. Alternatively or additionally, the primary imager 412 and/or the one or more secondary imagers 414 may be configured to decode symbols in images captured respectively. In some embodiments, the one or more secondary imagers 414 can send decode results to the primary imager 412, and the primary imager 412 can determine the quality of the decode results and provide feedback to the one or more secondary imagers 414.
[00112] The image processing device 408 may include an image stitcher 418, which may be configured to generate composite image data by, for example, stitching image data acquired by one or more imagers 412 and 414 over one or more triggers.
[00113] The image processing device 408 may include a correlator 426, which may be configured to generate correlated system information. The correlated system information may be generated based at least in part on the metadata generated by the primary imagers 412 of the one or more imaging devices 402. The correlator 426 may be configured to correlate the received image data to respective imagers 412 and 414 of respective imaging devices 402, and/or the decoded symbols by the symbol decoder 416. The correlator 426 may be configured to correlate the composite image data generated by the image stitcher 418 to sides of the objects captured by the one or more imaging devices 402. The correlator 426 may be configured to correlate the received measurement data to the received image data from respective imagers 412 and 414 of respective imaging devices 402, and/or the composite image data generated by the image stitcher 418. The image processing device 408 may include an aggregator 428, which may be configured to aggregate the correlated system information into one or more performance metrics. The one or more performance metrics may provide a holistic view of the performance of the machine vision system 400, which may include one or more tunnels described herein (e.g., tunnel 102 of FIG. 1A, and tunnel 202 of FIG. 2). The one or more performance metrics may include one or more metrics for individual devices (e.g., imagers 412, 414, dimensioner 404, and motion measurement device 406) in the machine vision system 400, and one or more metrics for individual triggers.
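As a non-limiting sketch of the correlation step, the example below joins image records received separately from the primary and secondary imagers by trigger index using the per-trigger metadata; the record layout and field names are assumptions for illustration, not the specific data structures of correlator 426.

```python
def correlate(first_image_data, second_image_data, metadata):
    """Join image data from the primary and secondary imagers per trigger.

    `metadata` carries one entry per trigger (as generated by the primary
    imager); the image records carry the imager name and the trigger index
    they belong to. All field names here are illustrative.
    """
    correlated = {}
    for meta in metadata:
        trigger = meta["trigger_index"]
        correlated[trigger] = {"trigger_time": meta["trigger_time"], "images": []}
    for record in first_image_data + second_image_data:
        entry = correlated.get(record["trigger_index"])
        if entry is not None:
            entry["images"].append({"imager": record["imager"], "image": record["image"]})
    return correlated

metadata = [{"trigger_index": 101, "trigger_time": 1706181603.2}]
first = [{"trigger_index": 101, "imager": "primary", "image": "img_p_101.png"}]
second = [{"trigger_index": 101, "imager": "secondary_1", "image": "img_s1_101.png"}]
print(correlate(first, second, metadata))
```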
[00114] The data interpreted by the image processing device, such as the correlated system information and the aggregated system information, may be provided to tunnel analytics 410 for further interpretation and/or generating presentations of the data, based on which maintenance activities may be applied to respective tunnels. Tunnel analytics 410 may include live dashboard 420, performance dashboard 422, and result browser 424.
[00115] The live dashboard 420 may be configured to generate presentations for display based at least in part on information received from the image processing device and enable the presentations to be searchable by the performance metrics. In some embodiments, the live dashboard 420 may generate and display images within a threshold time period from when the image data is acquired by the imagers 412 and 414 of the imaging devices 402, respectively. The threshold time period may be, for example, five to twenty seconds when the images are generated based on composite image data from a plurality of imaging devices, and two to ten seconds (e.g., five seconds) when the images are generated based on received image data from individual imaging devices. Such live result streams may enable a reviewer of the live result streams to pause the live result streams so as to investigate areas of interest such as data associated with a particular trigger.
[00116] The performance dashboard 422 may be configured to generate graphical analyses of the one or more metrics for individual devices in the machine vision system. The one or more metrics may include a rate of capture by a respective device, dimensions of objects captured by a respective device, scales of objects captured by a respective device, decode time by a respective device, etc. The graphical analyses may include one or more charts. In some embodiments, the charts may show one or more of the metrics of respective devices over time. For example, a step chart may show rates of capture or good read count by respective imagers in an imaging device. As another example, a histogram chart may show dimensions of objects captured by a respective imager. The one or more metrics for individual devices in the machine vision system may be customized according to requests. For example, minimum values, maximum values, and/or mean values of the one or more metrics for individual devices in the machine vision system may be provided.
[00117] The result browser 424 may be configured to generate presentations of the data by the one or more metrics for individual triggers. The one or more metrics for individual triggers may include trigger information, decode results, dimensioner results, scale results, and sorter information. The data may be searchable and filterable by the one or more metrics. The result browser 424 may be configured to generate representations of the filtered data, which may facilitate further analyses of the performances of respective tunnels.
[00118] In some embodiments, the image processing device 408 and tunnel analytics 410 may communicate over one or more communication links 430. In some embodiments, communication link 430 can be any suitable communication network or combination of communication networks. For example, communication link 430 can include a Wi-Fi network (which can include one or more wireless routers, one or more switches, etc.), a peer-to-peer network (e.g., a Bluetooth network), a cellular network (e.g., a 3G network, a 4G network, a 5G network, etc., complying with any suitable standard, such as CDMA, GSM, LTE, LTE Advanced, NR, etc.), a wired network, etc. In some embodiments, communication link 430 can be a local area network (LAN), a wide area network (WAN), a public network (e.g., the Internet), a private or semi-private network (e.g., a corporate or university intranet), any other suitable type of network, or any suitable combination of networks.
[00119] Communications links shown in FIG. 4 (e.g., communication links 430, 432, 434, 436; not all communication links marked in FIG. 4) can each be any suitable communications link or combination of communications links, such as wired links, fiber optic links, Wi-Fi links, Bluetooth links, cellular links, etc. In some embodiments, components of system 400 may communicate directly rather than through a communication network. In some embodiments, the components of system 400 may communicate through one or more intermediary devices not illustrated in FIG. 4.
[00120] FIG. 5A is a flow chart illustrating a method 500 for analyzing image data captured by a machine vision system, according to some embodiments. At step 502, an image processing device (e.g., image processing device 408) may receive, from a primary imager (e.g., primary imager 412 of imaging device 402), first image data and metadata generated by the primary imager. The first image data can include images captured by the primary imager. The metadata can indicate trigger information such as time of trigger, decode count for each trigger, etc. The metadata can be generated by the primary imager based on information provided by one or more secondary imagers (e.g., secondary imagers 414 of imaging device 402) to the primary imager. In some embodiments, the information may be provided by the secondary imagers to the primary imager through wired connections between the primary imager and secondary imagers (e.g., which may deliver the information faster than a wireless connection). In some embodiments, the one or more secondary imagers can decode symbols in the images captured by the one or more secondary imagers, and the information provided by the one or more secondary imagers can include decode results of the symbols in the images captured by the one or more secondary imagers and metadata. At step 504, the image processing device may receive, from the one or more secondary imagers, second image data generated by the one or more secondary imagers. The second image data can include images captured by the one or more secondary imagers and metadata. The second image data can include quality indications for the images captured by the one or more secondary imagers. The quality indications can be determined, by the primary imager, based on the decode results, and sent to the one or more secondary imagers from the primary imager. For example, the primary imager can determine whether a decode result of a symbol in an image is a “good” read or a “bad” read of the symbol such that the image can have a corresponding label associated therewith. Alternatively or additionally, the second image data may also be sent to the primary imager by the secondary imagers and transmitted to the image processing device by the primary imager. In some embodiments, the first image data and second image data and metadata may be provided to the image processing device through wired connections. Alternatively or additionally, wireless connections may be used to transmit the first image data and second image data and metadata. At step 506, the image processing device may correlate the first image data and the second image data based on the metadata generated by the primary imager to generate correlated system information.

[00121] FIG. 5B is a flow chart illustrating a method 501 for providing a live dashboard (e.g., live dashboard 420), according to some embodiments. At steps 508 and 510, tunnel analytics (e.g., tunnel analytics 410) may generate images based on the correlated system information for the primary imager and the secondary imagers, respectively, for display via a graphical user interface (GUI).
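As an illustration of the quality indications described above, the sketch below assigns a "good read"/"no read" label to each captured image based on reported decode results; the field names and the labeling rule are illustrative assumptions rather than the specific logic of the primary imager.

```python
def label_images(decode_results, min_symbol_count=1):
    """Label each image based on the decode results reported for it."""
    labels = {}
    for result in decode_results:
        good = result["decoded"] and result["symbol_count"] >= min_symbol_count
        labels[result["image_id"]] = "good read" if good else "no read"
    return labels

# Decode results as a secondary imager might report them for one trigger.
decode_results = [
    {"image_id": "s1_101", "decoded": True,  "symbol_count": 1},
    {"image_id": "s2_101", "decoded": False, "symbol_count": 0},
]
print(label_images(decode_results))  # {'s1_101': 'good read', 's2_101': 'no read'}
```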
[00122] A live dashboard can provide, for viewing, a live stream of images and associated machine vision data (e.g., triggers, results, etc.). The live dashboard can enable real-time analysis of the machine vision system, such as to determine whether the tunnels are being triggered properly, whether packages are moving through the tunnels correctly, etc. FIG. 6A is an exemplary user interface 600 of a live dashboard that shows live captured images and associated data, according to some embodiments. The user interface 600 may include a source tunnel selection drop-down menu 604 that allows a user to select a tunnel (e.g., tunnel 102 of FIG. 1A, or tunnel 202 of FIG. 2) to view in the user interface 600. The user interface 600 may include an imaging device selection drop-down menu 606 configured for the selection of one or more image source devices (e.g., imagers 412, 414) in the user interface 600. In the illustrated example, multiple image source devices are selected and shown in an array 608, each row of which may show images captured by a respective image source device. The images may be normalized by selection of box 620. The images may be selected to be further examined in an image viewer 612. In the illustrated example, image 610 is selected. The image viewer 612 may include image control selectable list 614, which may be configured to manipulate the selected image in various ways including, for example, Rotate Left 90, Rotate Right 90, Reset Rotation, Zoom In, Zoom Out, Zoom to Original Size, Reset Zoom, Move Center, Reset All Settings, etc. The user interface 600 may include a result table 616, which may be configured to provide information for individual triggers including, for example, Timestamp, Trigger Index, Read String, Length, Width, Height, Object Gap, etc.
[00123] Each entry in the results table 616 can be stored, for example, as an object with the associated data. As described herein, the techniques can provide for processing large streams of data from primary and/or secondary imaging devices, which may be received separately from the devices, and organizing the data into objects. For example, the primary imaging device may provide its own associated imaging data and metadata that is generated based on data for the primary imaging device as well as data from the secondary imaging devices. The secondary imaging devices may also provide their own associated imaging data. The techniques, as described herein, can include combining the image data that is (separately) received from the various imaging devices based on the metadata in order to generate the objects that are used to populate results table 616. In some embodiments, the techniques can provide for downloading data associated with individual entries of the results table 616. For example, the techniques can provide for downloading data associated with individual triggers.
[00124] The user interface 600 may include a status tile 618, which may be configured to provide information such as Triggers 618A, Multi Reads 618B, Packages Dimensioned 618C, etc. Triggers 618A may provide information such as overall throughput (e.g., 75,100 in FIG. 6), good read count (e.g., 74,552 in FIG. 6), no read count (e.g., 548 in FIG. 6), and read rate (e.g., 99.27% in FIG. 6). Multi Reads 618B may provide information such as multi read count when multiple symbols (e.g., barcodes) are decoded from a single imaged object (e.g., 32,939 in FIG. 6), and percentage of multi read count versus overall throughput (e.g., 43.86% in FIG. 6). Packages Dimensioned 618C may provide information such as the number of packages dimensioned, which may mean the number of packages that have dimensions greater than zero (e.g., 75,100 in FIG. 6), Count of packages legal for trade, and Count of packages not legal for trade. The user interface 600 may include a Reset button 622, which may be configured to provide options to reset the live dashboard on a regular basis (e.g., each shift, daily, or weekly), which may make the statistics more meaningful.
[00125] FIG. 6B is another exemplary user interface 601 of a live dashboard that shows live captured images and associated data, according to some embodiments. As illustrated, the user interface 601 may be configured similar to the user interface 600, but the status tile 619 of the user interface 601 includes an additional interface component Tunnel Connection Status 619A. Tunnel Connection Status 619A may provide status information such as whether the image processing device 408 and/or the devices coupled to the image processing device 408 (e.g., imagers 412, 414, dimensioner 404, motion measurement device 406) are connected to the tunnel analytics 410. FIG. 6C shows the exemplary user interface 601 when Tunnel Connection Status 619A is selected, according to some embodiments. A dialog box 621 can appear to show the connection status 623 of individual devices 625 configured to be accessible by the tunnel analytics 410 (e.g., dimensioner 404, motion measurement device 406, imagers 412, 414, image processing device 408).

[00126] FIG. 7 is an exemplary user interface 700 showing live stitched images 702, according to some embodiments. As illustrated, the user interface 700 may be configured similar to the user interface 600 discussed above. One difference is that the images 702 are stitched images, which may be generated based on one or more live images 602. The stitched images 702 provide a more complete view of respective sides of an object. In the illustrated example, stitched images 704, 706, and 708 show the back, bottom, and front of an object, respectively. All six sides of the object may be provided.
[00127] FIG. 8 is a flow chart illustrating a method 800 for providing a performance dashboard (e.g., performance dashboard 422), according to some embodiments. At step 802, an image processing device (e.g., image processing device 408) may receive, from a primary imager (primary imager 412 of imaging device 402), first image data captured by the primary imager and metadata. The metadata can be generated by the primary imager based on information provided by secondary imagers (e.g., secondary imagers 414 of imaging device 402) to the primary imager. In some embodiments, the information may be provided by the secondary imagers to the primary imager through wired connections between the primary imager and secondary imagers (e.g., which may deliver the information faster than a wireless connection). At step 804, the image processing device may receive, from the secondary imagers, second image data captured by the one or more secondary imagers. Alternatively or additionally, the second image data may also be sent to the primary imager by the secondary imagers and transmitted to the image processing device by the primary imager. In some embodiments, the first image data and second image data and metadata may be provided to the image processing device through wired connections. Alternatively or additionally, wireless connections may be used to transmit the first image data and second image data and metadata. At step 806, the image processing device may correlate the first image data and the second image data based on the metadata generated by the primary imager to generate correlated system information. At step 808, tunnel analytics (e.g., tunnel analytics 410) may generate one or more graphical analyses of one or more metrics based on the correlated system information for the primary imager and the one or more secondary imagers, respectively.
[00128] A performance dashboard can provide, for viewing, graphical analyses of various data associated with the machine vision system. The performance dashboard can enable real-time analysis of the machine vision system, such as to determine whether the tunnels are functioning properly, etc. FIG. 9 is a schematic diagram illustrating a user interface 900 of a performance dashboard and the types of charts available through the performance dashboard, according to some embodiments. The user interface 900 may provide a chart type selection drop-down menu 906. The types of charts may include timeseries chart and numerical distribution. When the option of timeseries chart is selected, the performance dashboard may provide further options listed in the table of 902; and when the option of numerical distribution is selected, the performance dashboard may provide further options listed in the table of 904. The underlying data can be received from, and/or pulled from, the machine vision system devices periodically (e.g., every ten seconds, thirty seconds, minute, etc.).
[00129] Multiple charts of various types, which may show various performance aspects of a selected tunnel, may be provided for examination at the same time. FIG. 10 is an exemplary timeseries chart 1000 showing read rate of a device (e.g., primary imager 412) over time, according to some embodiments. The user interface may provide a button 1002 configured to enable the selection of additional charts. For example, the selection of the button 1002 may lead to another user interface such as the user interface 900 discussed above. FIG. 11 is an exemplary numerical distribution 1100 showing the distribution of dimensions captured by the device of FIG. 10, according to some embodiments. In some embodiments, these charts may be displayed together. These charts may be used together to determine whether the device is performing properly. For example, a sudden drop of read rate may indicate abnormalities in the tunnel.
[00130] The intervals for a timeseries chart may be selected based on the use case. FIG. 12A is an exemplary timeseries chart 1200A showing read rate of another device (e.g., a secondary imager) over time with one (1) hour intervals, according to some embodiments. FIG. 12B is an exemplary timeseries chart 1200B showing the read rate of FIG. 12A over time with fifteen (15) minute intervals, according to some embodiments. This configuration enables potential issues to be identified quickly using, for example, larger intervals, and then investigated in more detail using, for example, smaller intervals.
[00131] A chart may provide one or more metrics of multiple devices. FIG. 13 is an exemplary timeseries chart 1300 showing read rates of multiple devices (e.g., the primary imager 412 and the secondary imagers 414 of the imaging device 402) over time, according to some embodiments. Such a configuration may enable the identification of one or more devices that are under-performing, which may allow users to improve their performance. For example, the techniques can be used to determine when a device is dirty (e.g., a dirty lens), when a device has moved (e.g., bumped by an object moving through the tunnel), when a device is offline, etc.
[00132] A performance dashboard may provide options to create standard and/or custom events. The creation of these events may enable notification and analyses of the occurrence of the events. An event may be defined as a condition around one or more metrics. Such a configuration may enable the creation of various levels of notifications including, for example, a critical level, a warning level, and an info level. For example, a warning-level notification may be created when the read rate is below 98% and a critical-level notification may be created when the read rate is below 95%; a warning-level notification may be created when the SBS count is greater than 5 and a critical-level notification when the SBS count is greater than 10; an info-level notification may be created when a configuration change happens; an info-level notification may be created when the throughput reaches 100,000; etc. FIG. 14A is an exemplary user interface 1400A for creating new events, according to some embodiments. As illustrated, the user interface 1400A may provide multiple drop-down menus 1402 such that aspects of an event may be selected including, for example, statistic (e.g., Trigger Overrun Count in FIG. 14A), function (e.g., > (Greater than) in FIG. 14A), value (e.g., 5 in FIG. 14A), and threat level (e.g., options among Notification, Warning, and Critical in FIG. 14A). FIG. 14B is an exemplary user interface 1400B showing created events, according to some embodiments. The user interface 1400B may include buttons 1404 configured for actions such as editing an event, removing an event, and/or changing notification type. FIG. 14C is an exemplary user interface 1400C for creating new events, according to some embodiments. The user interface 1400C can be a subpage (instead of a dialog as shown with the user interface 1400A). The user interface 1400C can allow a user to provide inputs to create the conditions of a new event. As illustrated, the user interface 1400C can allow for the designation or selection of devices such that events can be created for individual devices. Although one device (here, INBD_Top_3-Right_P) is illustrated with an associated selection button 1410, it should be appreciated that multiple device options can be available for selection, and the designation of devices can be achieved in any suitable manner, such as by additionally or alternatively providing text boxes for a user to input the name of a device, etc. The user interface 1400C can also allow for specifying metric conditions for individual events as illustrated via the dropdowns for metric, operator, and time window, and the text field for threshold in the section 1412 for creating a performance condition. Additionally, via section 1414, alert details can be configured including an alert name, an alert level, and any desired alert notes. FIG. 14D is another exemplary user interface 1400D showing created events, according to some embodiments. The user interface 1400D may include buttons 1408 configured for actions such as editing an event, removing an event, and/or changing notification type.
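By way of illustration, the sketch below evaluates hypothetical event definitions (a condition around a metric plus a threat level) against current metric values to produce notifications; the metric names and thresholds mirror the examples above, but the data structures are assumptions for illustration only.

```python
# Hypothetical event definitions: a condition around one metric plus a level.
EVENTS = [
    {"metric": "read_rate", "op": "<", "threshold": 0.98, "level": "warning"},
    {"metric": "read_rate", "op": "<", "threshold": 0.95, "level": "critical"},
    {"metric": "sbs_count", "op": ">", "threshold": 10,   "level": "critical"},
]

OPS = {"<": lambda a, b: a < b, ">": lambda a, b: a > b}

def evaluate_events(metrics, events=EVENTS):
    """Return the notifications raised by the current metric values."""
    notifications = []
    for event in events:
        value = metrics.get(event["metric"])
        if value is not None and OPS[event["op"]](value, event["threshold"]):
            notifications.append({"level": event["level"],
                                  "metric": event["metric"],
                                  "value": value})
    return notifications

# e.g., a read rate of 96.5% raises the warning-level event but not the critical one.
print(evaluate_events({"read_rate": 0.965, "sbs_count": 3}))
```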
[00133] FIG. 15 is an exemplary user interface 1500 for viewing, filtering, and sorting notifications, according to some embodiments. The user interface 1500 may provide multiple drop-down menus 1502 configured for various sorting options (e.g., timeframe) and various filtering options (e.g., timeframe and notification level). The user interface 1500 may also provide input boxes 1504 configured for additional filtering options including, for example, Service Name, Title, and Details.
[00134] FIG. 16 is a flow chart illustrating a method 1600 for providing a result browser (e.g., result browser 424), according to some embodiments. At step 1602, an image processing device (e.g., image processing device 408) may receive, from a primary imager (e.g., primary imager 412 of imaging device 402), first image data captured by the primary imager and metadata. The metadata may be generated by the primary imager based on information provided by secondary imagers (e.g., secondary imagers 414 of imaging device 402) to the primary imager. In some embodiments, the information may be provided by the secondary imagers to the primary imager through wired connections between the primary imager and secondary imagers (e.g., which may deliver the information faster than a wireless connection). At step 1604, the image processing device may receive, from the secondary imagers, second image data captured by the one or more secondary imagers. Alternatively or additionally, the second image data may also be sent to the primary imager by the secondary imagers and transmitted to the image processing device by the primary imager. In some embodiments, the first image data and second image data and metadata may be provided to the image processing device through wired connections. Alternatively or additionally, wireless connections may be used to transmit the first image data and second image data and metadata. At step 1606, the image processing device may correlate the first image data and the second image data based on the metadata generated by the primary imager to generate correlated system information. At step 1608, tunnel analytics (e.g., tunnel analytics 410) may filter the correlated system information according to one or more metrics. At step 1610, tunnel analytics may generate a representation of the filtered correlated system information.

[00135] A result browser may provide presentations by various metrics such as individual triggers. The result browser can enable customized analyses of the machine vision system by system data such as triggers, which may expose hidden problems in the tunnels. FIG. 17 is an exemplary user interface 1700 showing trigger data 1702 in a table 1704, according to some embodiments. As illustrated, each row of the table 1704 may include various aspects of individual triggers such as Date & Time, Trigger Index, Good Read, Read String, Angle, Length, Width, Height, Gaps, and LFT. The user interface 1700 may include drop-down menu 1706 configured to provide searching and filtering options for the table 1704. For example, various aspects of the tunnels may be searched based on user-defined multi-conditional search queries including, for example, No Reads that have package heights greater than a user-defined threshold, Non-legal for trade packages, packages with all dimensions assigned to -1, all packages with all dimensions assigned 0, triggers that are flagged as off-box (which may require 3D calibration), and triggers that have codes on the bottom of the package (which may require 3D calibration).
[00136] FIG. 18A is an exemplary user interface 1800A for searching trigger data, according to some embodiments. The user interface 1800A may provide multiple selection lists 1802 configured for trigger data to be searched by, for example, Decode Results, Dimensioner, Scale Result, and Sorter Information, respectively. For example, selection of the selection lists 1802 may lead to user interfaces 1800B, 1800C, 1800E, and 1800F shown in FIGs. 18B, C, E, and F, respectively. The user interface 1800B may be configured for searching the trigger data by decode results. The user interface 1800B may include input boxes 1804 configured for providing search options such as Symbology, Module Size, etc. The user interface 1800B may include boxes 1806 configured for providing search options such as Assignment Results, Assigned Surface, etc. The user interface 1800C may be configured for searching the trigger data by dimensioner. The user interface 1800C may include radio buttons 1808 configured for providing search options such as Legal for Trade, Is Side By Side, etc. The user interface 1800C may include input boxes 1810 configured for providing search options such as Angle, Length, Width, Height, Object Gap, etc. FIG. 18D is an exemplary user interface 1800D showing the result of the search of FIG. 18C, according to some embodiments. The user interface 1800E may be configured for searching the trigger data by scale result. The user interface 1800E may include radio buttons 1812 configured for providing search options such as Legal For Trade, etc. The user interface 1800E may include input boxes 1814 configured for providing search options such as Weight. The user interface 1800F may be configured for searching the trigger data by sorter information. The user interface 1800F may include input boxes 1816 configured for providing search options such as Vendor Token.
[00137] FIG. 19A is another exemplary user interface 1900A for searching trigger data, according to some embodiments. The user interface 1900A may be configured similarly to the user interface 1800A discussed above. As illustrated, trigger data may be searched by Trigger Information, Decode Results, Dimensioner, Scale Result, and Sorter Information. The searches by Decode Results, Dimensioner, Scale Result, and Sorter Information may be configured similarly to the examples shown in FIGs. 18A-18F. FIG. 19B is an exemplary user interface 1900B for searching the trigger data by trigger information, according to some embodiments. The user interface 1900B may include buttons 1902 configured for providing search options such as Multi Read. The user interface 1900B may include input boxes 1904 configured for providing search options such as Trigger Index. The user interface 1900B may include a drop-down menu 1906 configured for providing search options such as Trigger Type.
[00138] The results of the searches may be downloaded for further analyses. FIG. 19C is an exemplary user interface 1900C for downloading the search results, according to some embodiments. As illustrated, various data may be saved including, for example, Trigger and Image.
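One hedged sketch of such a download path follows; it assumes the filtered results are available as plain dictionaries with trigger_index and image_path keys (both assumed names), and simply writes the trigger rows to CSV and the image references to JSON.

```python
import csv
import json
from pathlib import Path
from typing import Dict, Iterable

def export_search_results(rows: Iterable[Dict[str, object]], directory: Path) -> None:
    """Write trigger rows to CSV and image references to JSON so the
    search results can be downloaded for further analysis."""
    rows = list(rows)
    if not rows:
        return
    directory.mkdir(parents=True, exist_ok=True)
    with open(directory / "triggers.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
    image_refs = {str(r.get("trigger_index")): r.get("image_path") for r in rows}
    with open(directory / "images.json", "w") as f:
        json.dump(image_refs, f, indent=2)
```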
[00139] Individual trigger information may include image data and measurement data. The triggers may be assigned into one or more groups by result tags. FIG. 20 is an exemplary user interface 2000 for editing result tags, according to some embodiments. The illustrated trigger has a result tag “Multi Read” shown in area 2002 of the user interface 2000. The result tag may be edited via, for example, button 2004 of the user interface 2000. A new result tag may be added through input box 2006 of the user interface 2000. Such a configuration enables customized grouping of trigger data.
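A possible data model for result tags, with all names hypothetical, is sketched below; it only illustrates the add/edit/group operations described above.

```python
from dataclasses import dataclass, field
from typing import Iterable, List, Set

@dataclass
class TaggedTrigger:
    trigger_index: int
    tags: Set[str] = field(default_factory=set)

    def add_tag(self, tag: str) -> None:
        self.tags.add(tag.strip())

    def remove_tag(self, tag: str) -> None:
        self.tags.discard(tag)

def group_by_tag(triggers: Iterable[TaggedTrigger], tag: str) -> List[TaggedTrigger]:
    """Return the customized group of triggers carrying a given result tag."""
    return [t for t in triggers if tag in t.tags]

# Example: tag a trigger as "Multi Read" and retrieve that group later.
trigger = TaggedTrigger(trigger_index=42)
trigger.add_tag("Multi Read")
assert group_by_tag([trigger], "Multi Read") == [trigger]
```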
[00140] FIG. 21A is an exemplary user interface 2100A showing trigger data in a table 2102, according to some embodiments. The user interface 2100A may include button 2104, which may activate options for reconfiguring the table 2102. FIG. 21B is an exemplary user interface 2100B for reconfiguring the table showing the trigger data, according to some embodiments. As illustrated, the table 2102 may include radio buttons 2104 such that the table 2102 may be reconfigured based on one or more of Date & Time, Trigger Index, Good Read, Multi Read, Decode Results, Assigned Surface, Dimensioner, Scale, and Sorter Information. Decode Results may include Read String, Assignment Result, and Read String Valid. Assigned Surface may include Left, Right, Top, Front, Back, and Bottom. Dimensioner may include Multi Object, Trigger ID, Damaged, Side By Side, Angle, Length, Width, Height, Gap, Legal For Trade Compliancy, and Error. Scale may include Weight and Legal for Trade Compliancy. Sorter Information may include Vendor Token. FIG. 21C is an exemplary user interface 2100C showing the result of a reconfigured table 2104, according to some embodiments.
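The column selection of FIG. 21B could be modeled as a simple projection of each trigger row onto the chosen columns; the following sketch assumes rows are dictionaries keyed by the display names listed above, an assumption made only for illustration.

```python
from typing import Dict, List

# Available columns of the trigger table (display names taken from the example above).
ALL_COLUMNS = [
    "Date & Time", "Trigger Index", "Good Read", "Multi Read",
    "Decode Results", "Assigned Surface", "Dimensioner", "Scale",
    "Sorter Information",
]

def reconfigure_table(rows: List[Dict[str, object]],
                      selected: List[str]) -> List[Dict[str, object]]:
    """Project each row onto the columns the user selected, preserving
    the selection order; unknown column names are ignored."""
    visible = [c for c in selected if c in ALL_COLUMNS]
    return [{c: row.get(c) for c in visible} for row in rows]
```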
[00141] FIG. 22 is an exemplary user interface 2200 for reviewing trigger data, according to some embodiments. As illustrated, the user interface 2200 may include button 2202, which may enable the trigger data to be downloaded for further analyses.
[00142] In some embodiments, any suitable computer readable media can be used for storing instructions for performing the functions and/or processes described herein. For example, in some embodiments, computer readable media can be transitory or non-transitory. For example, non-transitory computer readable media can include media such as magnetic media (such as hard disks, floppy disks, etc.), optical media (such as compact discs, digital video discs, Blu-ray discs, etc.), semiconductor media (such as RAM, Flash memory, electrically programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), etc.), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media. As another example, transitory computer readable media can include signals on networks, in wires, conductors, optical fibers, circuits, or any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media.
[00143] It should be noted that, as used herein, the term mechanism can encompass hardware, software, firmware, or any suitable combination thereof.
[00144] It should be understood that the above-described acts of the methods 500, 800, and 1600 can be executed or performed in any order or sequence not limited to the order and sequence shown and described in the figures. Also, some of the above acts of the methods 500, 800, and 1600 can be executed or performed substantially simultaneously where appropriate or in parallel to reduce latency and processing times.
[00145] Various aspects are described in this disclosure, which include, but are not limited to, the following aspects: 1. A method for analyzing image data captured by a machine vision system comprising a primary imager and a secondary imager, the method comprising: receiving, from the primary imager, first image data and metadata generated by the primary imager, the metadata based at least in part on information provided by the secondary imager to the primary imager; receiving, from the secondary imager, second image data, wherein the second image data comprises a captured image as captured by the secondary imager; and generating correlated system information by correlating the first image data and the second image data based on the metadata.
2. The method of 1, further comprising generating an image based on the correlated system information.
3. The method of 2, further comprising displaying the image via a graphical user interface (GUI).
4. The method of any one of 2-3, further comprising generating a new image in response to receiving new first image data from the primary imager.
5. The method of any one of the preceding, further comprising, prior to receiving the second image data: decoding a symbol within the captured image; and generating the second image data based on a decode result.
6. The method of 5, wherein decoding the symbol and generating the second image data is performed by the secondary imager.
7. The method of 6, wherein: the second image data further comprises a quality indication corresponding to the captured image, the quality indication determined by the primary imager based on the decode result.
8. The method of any one of the preceding, wherein: receiving the first image data and the metadata comprises receiving the first image data and the metadata over a first wired connection with the primary imager; and receiving the second image data comprises receiving the second image data over a second wired connection with the secondary imager.
9. The method of any one of the preceding, wherein: the correlated system information comprises status information of the machine vision system comprising at least one of: an indication of whether the primary imager is connected, an indication of whether the secondary imager is connected, a count of machine vision system triggers, a count of multi reads, or a count of objects.
10. The method of 9, wherein: the status information of the machine vision system comprises status information for individual imagers of the primary imager and the secondary imager.
11. The method of any one of the preceding, further comprising: generating a graphical analysis of a metric based on the correlated system information for the primary imager and the secondary imager, respectively.
12. The method of 11, wherein the graphical analysis comprises a chart visually depicting the metric.
13. The method of any one of 11-12, wherein the metric comprises a symbol decode rate by the secondary imager.
14. The method of any one of 1-10, further comprising: filtering the correlated system information according to a metric; and generating a representation of the filtered correlated system information.
15. The method of any one of the preceding, further comprising: receiving, from a dimensioner associated with the machine vision system, an object dimension.
16. The method of any one of the preceding, further comprising: receiving, from a scale associated with the machine vision system, an object weight.
17. The method of any one of 15-16, wherein the metric comprises at least one of: dimensioner results, scale results, or sorter information.
18. The method of 14, wherein the metric comprises at least one of: machine vision system trigger information or decode results.
19. A non-transitory computer-readable medium comprising instructions which, when executed, cause at least one processor to carry out the method of any one of the preceding.
20. A machine vision system comprising: a primary imager configured to generate first image data and metadata; a secondary imager in communication with the primary imager and configured to generate second image data, wherein the second image data comprises a captured image as captured by the secondary imager; and at least one processor in communication with the primary imager and the secondary imager and configured to: receive the first image data and the metadata; receive the second image data; and generate correlated system information by correlating the first image data and the second image data based on the metadata, wherein the metadata is generated by the primary imager based on information provided by the secondary imager.
21. The machine vision system of 20, wherein the at least one processor is further configured to: generate an image based on the correlated system information; and transmit the image to a graphical user interface (GUI).
22. The machine vision system of any one of 20-21, wherein the primary imager is configured to generate the first image data in response to receiving a trigger signal.
23. The machine vision system of 22, wherein the secondary imager is configured to generate the second image data in response to receiving the trigger signal from the primary imager.
24. The machine vision system of any one of 20-23, wherein the second image data further comprises a decode result corresponding to a symbol within the captured image.
25. The machine vision system of any one of 20-24, wherein the at least one processor communicates with the primary imager via a first connection, and wherein the at least one processor communicates with the secondary imager via a second connection.
26. The machine vision system of any one of 20-25, wherein the at least one processor is further configured to transmit status information for each of the primary imager and the secondary imager.
27. The machine vision system of any one of 20-26, further comprising a plurality of secondary imagers in communication with the primary imager and the at least one processor, the second image data generated by the plurality of secondary imagers.
[00146] Although the invention has been described and illustrated in the foregoing illustrative embodiments, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the details of implementation of the invention can be made without departing from the spirit and scope of the invention, which is limited only by the claims that follow. Features of the disclosed embodiments can be combined and rearranged in various ways.


CLAIMS What is claimed is:
1. A method for analyzing image data captured by a machine vision system comprising a primary imager and a secondary imager, the method comprising: receiving, from the primary imager, first image data and metadata generated by the primary imager, the metadata based at least in part on information provided by the secondary imager to the primary imager; receiving, from the secondary imager, second image data, wherein the second image data comprises a captured image as captured by the secondary imager; and generating correlated system information by correlating the first image data and the second image data based on the metadata.
2. The method of claim 1, further comprising generating an image based on the correlated system information.
3. The method of claim 2, further comprising displaying the image via a graphical user interface (GUI).
4. The method of any one of claims 2-3, further comprising generating a new image in response to receiving new first image data from the primary imager.
5. The method of any one of the preceding claims, further comprising, prior to receiving the second image data: decoding a symbol within the captured image; and generating the second image data based on a decode result.
6. The method of claim 5, wherein decoding the symbol and generating the second image data is performed by the secondary imager.
7. The method of claim 6, wherein: the second image data further comprises a quality indication corresponding to the captured image, the quality indication determined by the primary imager based on the decode result.
8. The method of any one of the preceding claims, wherein: receiving the first image data and the metadata comprises receiving the first image data and the metadata over a first wired connection with the primary imager; and receiving the second image data comprises receiving the second image data over a second wired connection with the secondary imager.
9. The method of any one of the preceding claims, wherein: the correlated system information comprises status information of the machine vision system comprising at least one of: an indication of whether the primary imager is connected, an indication of whether the secondary imager is connected, a count of machine vision system triggers, a count of multi reads, or a count of objects.
10. The method of claim 9, wherein: the status information of the machine vision system comprises status information for individual imagers of the primary imager and the secondary imager.
11. The method of any one of the preceding claims, further comprising: generating a graphical analysis of a metric based on the correlated system information for the primary imager and the secondary imager, respectively.
12. The method of claim 11, wherein the graphical analysis comprises a chart visually depicting the metric.
13. The method of any one of claims 11-12, wherein the metric comprises a symbol decode rate by the secondary imager.
14. The method of any one of claims 1-10, further comprising: filtering the correlated system information according to a metric; and generating a representation of the filtered correlated system information.
15. The method of any one of the preceding claims, further comprising: receiving, from a dimensioner associated with the machine vision system, an object dimension.
16. The method of any one of the preceding claims, further comprising: receiving, from a scale associated with the machine vision system, an object weight.
17. The method of any one of claims 15-16, wherein the metric comprises at least one of: dimensioner results, scale results, or sorter information.
18. The method of claim 14, wherein the metric comprises at least one of: machine vision system trigger information or decode results.
19. A non-transitory computer-readable medium comprising instructions which, when executed, cause at least one processor to carry out the method of any one of the preceding claims.
20. A machine vision system comprising: a primary imager configured to generate first image data and metadata; a secondary imager in communication with the primary imager and configured to generate second image data, wherein the second image data comprises a captured image as captured by the secondary imager; and at least one processor in communication with the primary imager and the secondary imager and configured to: receive the first image data and the metadata; receive the second image data; and generate correlated system information by correlating the first image data and the second image data based on the metadata, wherein the metadata is generated by the primary imager based on information provided by the secondary imager.
21. The machine vision system of claim 20, wherein the at least one processor is further configured to: generate an image based on the correlated system information; and transmit the image to a graphical user interface (GUI).
22. The machine vision system of any one of claims 20-21, wherein the primary imager is configured to generate the first image data in response to receiving a trigger signal.
23. The machine vision system of claim 22, wherein the secondary imager is configured to generate the second image data in response to receiving the trigger signal from the primary imager.
24. The machine vision system of any one of claims 20-23, wherein the second image data further comprises a decode result corresponding to a symbol within the captured image.
25. The machine vision system of any one of claims 20-24, wherein the at least one processor communicates with the primary imager via a first connection, and wherein the at least one processor communicates with the secondary imager via a second connection.
26. The machine vision system of any one of claims 20-25, wherein the at least one processor is further configured to transmit status information for each of the primary imager and the secondary imager.
27. The machine vision system of any one of claims 20-26, further comprising a plurality of secondary imagers in communication with the primary imager and the at least one processor, the second image data generated by the plurality of secondary imagers.
PCT/US2024/013158 2023-01-27 2024-01-26 High performance machine vision system WO2024159126A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363441570P 2023-01-27 2023-01-27
US63/441,570 2023-01-27

Publications (2)

Publication Number Publication Date
WO2024159126A1 true WO2024159126A1 (en) 2024-08-02
WO2024159126A8 WO2024159126A8 (en) 2024-09-12

Family

ID=90354842

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2024/013158 WO2024159126A1 (en) 2023-01-27 2024-01-26 High performance machine vision system

Country Status (1)

Country Link
WO (1) WO2024159126A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140347449A1 (en) * 2013-05-24 2014-11-27 Sony Corporation Imaging apparatus and imaging method
US20190333259A1 (en) 2018-04-25 2019-10-31 Cognex Corporation Systems and methods for stitching sequential images of an object
EP3657356A1 (en) * 2017-07-21 2020-05-27 Sony Corporation Information processing device and information processing method

Also Published As

Publication number Publication date
WO2024159126A8 (en) 2024-09-12

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24709920

Country of ref document: EP

Kind code of ref document: A1