US20220353388A1 - Systems and Methods for Enhancing Image Content Captured by a Machine Vision Camera


Info

Publication number
US20220353388A1
Authority
US
United States
Prior art keywords
indicia
image
pixel count
display region
predetermined ratio
Legal status
Granted
Application number
US17/375,846
Other versions
US11507245B1
Inventor
Abhilash Gururaja
Christopher M. West
Current Assignee
Zebra Technologies Corp
Original Assignee
Zebra Technologies Corp
Application filed by Zebra Technologies Corp
Priority to US17/375,846 (US11507245B1)
Assigned to Zebra Technologies Corporation. Assignors: Gururaja, Abhilash; West, Christopher M.
Priority to PCT/US2022/025058 (WO2022231874A1)
Priority to BE20225323A (BE1029310B1)
Publication of US20220353388A1
Priority to US17/991,588 (US11861134B2)
Application granted
Publication of US11507245B1
Priority to US18/402,390 (US20240143122A1)
Legal status: Active
Anticipated expiration

Classifications

    • H04N5/225
    • G06F3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06F9/451: Execution arrangements for user interfaces
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T7/0004: Industrial image inspection
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; control thereof
    • H04N23/51: Housings
    • H04N23/633: Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N23/661: Transmitting camera control signals through networks, e.g. control via the Internet
    • H04N23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N7/183: Closed-circuit television [CCTV] systems for receiving images from a single remote source

Definitions

  • machine vision components, like cameras, are capable of assisting operators in a wide variety of tasks.
  • these cameras are utilized to track passing objects, like those which move on conveyor belts past stationary cameras.
  • these cameras, along with the backend software, are used to capture images and determine a variety of parameters associated with the passing items.
  • One aspect of this requires an easy-to-use means of visualizing various elements on the screen of a user computing device executing the backend software.
  • the present invention is a method for enhancing image content captured by a machine vision camera.
  • the method may comprise: receiving, at an application executing on a user computing device communicatively coupled to a machine vision camera, an image captured by the machine vision camera, the image including a plurality of indicia, each of the plurality of indicia encoding a payload; identifying, in the image, each of the indicia; for each respective indicia in the image, determining bounds of a respective bounding box that substantially encompass each respective indicia in the image; displaying, on an interface of the application, a plurality of entries, each of the plurality of entries corresponding to a respective indicia of the plurality of indicia; receiving, at the interface, a selection of one of the entries resulting in a selected entry that has a corresponding indicia; determining, from the respective bounding box of the corresponding indicia, an upper pixel coordinate limit, a lower pixel coordinate limit, a
  • the method may further comprise for each respective indicia in the image, displaying the respective bounding box on an interface of the application.
  • At least one of the first predetermined ratio threshold or the second predetermined ratio threshold is inclusively between 1:2 and 2:3.
  • scaling the image includes scaling the image such that (i) the first vertical pixel count does not exceed the first predetermined ratio threshold of the second vertical pixel count, and (ii) the first horizontal pixel count does not exceed the second predetermined ratio threshold of the second horizontal pixel count.
  • each of the plurality of entries includes a payload of the corresponding indicia.
  • the method further comprises displaying the respective bounding box corresponding to each of the plurality of indicia appearing within a visible portion of the image displayed in the display region.
  • the method further comprises masking a masked region in the display region responsive to receiving the selection, wherein the masked region is a region outside of at least one of (i) the secondary box or (ii) the respective bounding box of the corresponding indicia.
  • At least one of the first predetermined ratio threshold and the second predetermined ratio threshold is user-definable such that a portion of the corresponding indicia is excluded from the scaled image.
  • the method further comprises displaying a miniature version of the image in the display region as an overlay covering a portion of the scaled image, wherein the miniature version of the image includes an indicated portion representing the scaled image.
  • the present invention is a method for enhancing image content captured by a machine vision camera.
  • the method may comprise: receiving, at an application executing on a user computing device communicatively coupled to a machine vision camera, an image captured by the machine vision camera, the image including a plurality of indicia, each of the plurality of indicia encoding a payload; identifying, in the image, each of the indicia; for each of the indicia in the image: determining bounds of a respective bounding box that substantially encompass the respective indicia, determining, from the respective bounding box, an upper pixel coordinate limit, a lower pixel coordinate limit, a first side pixel coordinate limit, and a second side coordinate limit for the respective indicia, and based on the upper pixel coordinate limit, the lower pixel coordinate limit, the first side pixel coordinate limit, and the second side coordinate limit of the respective bounding box, determining a respective secondary box having an upper bound, a lower bound, a first side bound, and
  • the method further comprises for each of the indicia in the image, displaying the respective bounding box on an interface of the application.
  • At least one of the first predetermined ratio threshold or the second predetermined ratio threshold is inclusively between 1:2 and 2:3.
  • scaling the image further includes scaling the image such that (i) the first vertical pixel count does not exceed the first predetermined ratio threshold of the second vertical pixel count, and (ii) the first horizontal pixel count does not exceed the second predetermined ratio threshold of the second horizontal pixel count.
  • each of the plurality of entries includes a payload of the corresponding indicia.
  • the method further comprises displaying the respective bounding box corresponding to each of the plurality of indicia appearing within a visible portion of the image displayed in the display region.
  • the method further comprises masking a masked region in the display region responsive to receiving the selection, wherein the masked region is a region outside of one of (i) the respective secondary box of the corresponding indicia or (ii) the respective bounding box of the corresponding indicia.
  • the present invention is a machine vision system for enhancing image content captured by a machine vision camera.
  • the machine vision system may comprise: a machine vision camera configured to capture an image including a plurality of indicia, each of the plurality of indicia encoding a payload; and a user computing device executing an application, wherein the user computing device is communicatively coupled to the machine vision camera and is configured to: receive the image captured by the machine vision camera, identify, in the image, each of the indicia, for each respective indicia in the image, determine bounds of a respective bounding box that substantially encompass each respective indicia in the image, display, on an interface of the application, a plurality of entries, each of the plurality of entries corresponding to a respective indicia of the plurality of indicia, receive, at the interface, a selection of one of the entries resulting in a selected entry that has a corresponding indicia, determine, from the respective bounding box of the corresponding indicia
  • the user computing device is further configured to for each of the indicia in the image, display the respective bounding box on an interface of the application.
  • At least one of the first predetermined ratio threshold or the second predetermined ratio threshold is inclusively between 1:2 and 2:3.
  • the user computing device is further configured to scale the image such that (i) the first vertical pixel count does not exceed the first predetermined ratio threshold of the second vertical pixel count, and (ii) the first horizontal pixel count does not exceed the second predetermined ratio threshold of the second horizontal pixel count.
  • FIG. 1 is an example system configured to enhance image content captured by a machine vision camera, in accordance with embodiments described herein.
  • FIG. 2A is a perspective view of the imaging device of FIG. 1 , in accordance with embodiments described herein.
  • FIG. 2B is a block diagram of an example logic circuit for implementing example methods and/or operations described herein.
  • FIGS. 3A and 3B each depict an example application interface illustrating a stage of image enhancement corresponding to decoding indicia and displaying decoded payloads present in an image captured by a machine vision camera, in accordance with embodiments of the present disclosure.
  • FIGS. 4A and 4B each depict an example application interface illustrating a stage of image enhancement corresponding to generating a secondary box that substantially encompasses indicia included in an image captured by a machine vision camera, in accordance with embodiments of the present disclosure.
  • FIGS. 5A-5C each depict an example application interface illustrating a stage of image enhancement corresponding to scaling and/or otherwise emphasizing a chosen indicia within an image captured by a machine vision camera, in accordance with embodiments of the present disclosure.
  • FIG. 6 is a flowchart representative of a method for enhancing image content captured by a machine vision camera, in accordance with embodiments described herein.
  • Machine vision system owners/operators periodically have a need to visually evaluate images captured by the system's imaging equipment. In doing so, there arises a need to zoom in on certain elements (like barcodes) within the captured images for closer evaluation. This can be difficult to accomplish in an automatic manner, especially if multiple elements of the same kind are present on the screen.
  • Approaches described herein address these difficulties and provide a solution which helps automate and simplify the zooming process.
  • FIG. 1 illustrates an example imaging system 100 configured to enhance image content captured by a machine vision camera, in accordance with various embodiments disclosed herein.
  • the imaging system 100 includes a user computing device 102 and an imaging device 104 communicatively coupled to the user computing device 102 via a network 106 .
  • the user computing device 102 and the imaging device 104 may be capable of executing instructions to, for example, implement operations of the example methods described herein, as may be represented by the flowcharts of the drawings that accompany this description.
  • the user computing device 102 is generally configured to enable a user/operator to create a machine vision job for execution on the imaging device 104 .
  • the user/operator may then transmit/upload the machine vision job to the imaging device 104 via the network 106 , where the machine vision job is then interpreted and executed.
  • the user computing device 102 may comprise one or more operator workstations, and may include one or more processors 108 , one or more memories 110 , a networking interface 112 , an input/output (I/O) interface 114 , a smart imaging application 116 , and an image enhancement application 128 .
  • a “machine vision job” as referenced herein may be or include any suitable imaging job including any suitable executable tasks, such as machine vision tasks, barcode decoding tasks, and/or any other tasks or combinations thereof.
  • the imaging device 104 is connected to the user computing device 102 via a network 106 , and is configured to interpret and execute machine vision jobs received from the user computing device 102 .
  • the imaging device 104 may obtain a job file containing one or more job scripts from the user computing device 102 across the network 106 that may define the machine vision job and may configure the imaging device 104 to capture and/or analyze images in accordance with the machine vision job.
  • the imaging device 104 may include flash memory used for determining, storing, or otherwise processing imaging data/datasets and/or post-imaging data.
  • the imaging device 104 may then receive, recognize, and/or otherwise interpret a trigger that causes the imaging device 104 to capture an image of the target object in accordance with the configuration established via the one or more job scripts. Once captured and/or analyzed, the imaging device 104 may transmit the images and any associated data across the network 106 to the user computing device 102 for further analysis and/or storage.
  • the imaging device 104 may be a “smart” camera and/or may otherwise be configured to automatically perform sufficient functionality of the imaging device 104 in order to obtain, interpret, and execute job scripts that define machine vision jobs, such as any one or more job scripts contained in one or more job files as obtained, for example, from the user computing device 102 .
  • the job file may be a JSON representation/data format of the one or more job scripts transferrable from the user computing device 102 to the imaging device 104 .
  • the job file may further be loadable/readable by a C++ runtime engine, or other suitable runtime engine, executing on the imaging device 104 .
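  • For illustration only, a job file of this kind might be assembled and serialized as in the sketch below. Every key shown (jobName, imagingSettings, tools, roi) is hypothetical and is not taken from this disclosure or from any Zebra product; the sketch merely shows a JSON representation of a job script being prepared for transfer.

```python
# Hypothetical sketch only: serializing a job file as JSON for transfer from the user
# computing device to the imaging device. All keys are invented for illustration and do
# not reflect the actual job-file schema.
import json

job_file = {
    "jobName": "example_barcode_job",                          # hypothetical field
    "imagingSettings": {"apertureSize": 4, "exposureMs": 8},   # hypothetical fields
    "tools": [
        {"type": "barcodeDecode", "roi": {"x": 0, "y": 0, "w": 1280, "h": 960}},
    ],
}

payload = json.dumps(job_file)  # JSON text that could be sent across the network
```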
  • the imaging device 104 may run a server (not shown) configured to listen for and receive job files across the network 106 from the user computing device 102 .
  • the server configured to listen for and receive job files may be implemented as one or more cloud-based servers, such as a cloud-based computing platform.
  • the server may be any one or more cloud-based platform(s) such as MICROSOFT AZURE, AMAZON AWS, or the like.
  • the imaging device 104 may include one or more processors 118 , one or more memories 120 , a networking interface 122 , an I/O interface 124 , and an imaging assembly 126 .
  • the imaging assembly 126 may include a digital camera and/or digital video camera for capturing or taking digital images and/or frames. Each digital image may comprise pixel data that may be analyzed by one or more tools each configured to perform an image analysis task.
  • the digital camera and/or digital video camera of, e.g., the imaging assembly 126 may be configured, as disclosed herein, to take, capture, or otherwise generate digital images and, at least in some embodiments, may store such images in a memory (e.g., one or more memories 110 , 120 ) of a respective device (e.g., user computing device 102 , imaging device 104 ).
  • the imaging assembly 126 may include a photo-realistic camera (not shown) for capturing, sensing, or scanning 2D image data.
  • the photo-realistic camera may be an RGB (red, green, blue) based camera for capturing 2D images having RGB-based pixel data.
  • the imaging assembly may additionally include a three-dimensional (3D) camera (not shown) for capturing, sensing, or scanning 3D image data.
  • the 3D camera may include an Infra-Red (IR) projector and a related IR camera for capturing, sensing, or scanning 3D image data/datasets.
  • the photo-realistic camera of the imaging assembly 126 may capture 2D images, and related 2D image data, at the same or similar point in time as the 3D camera of the imaging assembly 126 such that the imaging device 104 can have both sets of 3D image data and 2D image data available for a particular surface, object, area, or scene at the same or similar instance in time.
  • the imaging assembly 126 may include the 3D camera and the photo-realistic camera as a single imaging apparatus configured to capture 3D depth image data simultaneously with 2D image data. Consequently, the captured 2D images and the corresponding 2D image data may be depth-aligned with the 3D images and 3D image data.
  • imaging assembly 126 may be configured to capture images of surfaces or areas of a predefined search space or target objects within the predefined search space.
  • each tool included in a job script may additionally include a region of interest (ROI) corresponding to a specific region or a target object imaged by the imaging assembly 126 .
  • the composite area defined by the ROIs for all tools included in a particular job script may thereby define the predefined search space which the imaging assembly 126 may capture in order to facilitate the execution of the job script.
  • the predefined search space may be user-specified to include a field of view (FOV) featuring more or less than the composite area defined by the ROIs of all tools included in the particular job script.
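  • As a minimal sketch of the composite area described above, assuming each ROI is an axis-aligned rectangle (the tuple format and function name are illustrative, not from the disclosure), the smallest rectangle enclosing all tool ROIs could be computed as follows.

```python
# Illustrative sketch: the composite area defined by all tool ROIs, taken here as the
# smallest axis-aligned rectangle enclosing every ROI. A user-specified FOV may then
# feature more or less than this composite area.
def composite_search_space(rois):
    """rois: iterable of (left, top, right, bottom) rectangles in image coordinates."""
    lefts, tops, rights, bottoms = zip(*rois)
    return (min(lefts), min(tops), max(rights), max(bottoms))

# Example: two tool ROIs produce a composite area spanning both.
print(composite_search_space([(0, 0, 640, 480), (600, 400, 1280, 960)]))  # (0, 0, 1280, 960)
```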
  • the imaging assembly 126 may capture 2D and/or 3D image data/datasets of a variety of areas, such that additional areas in addition to the predefined search spaces are contemplated herein. Moreover, in various embodiments, the imaging assembly 126 may be configured to capture other sets of image data in addition to the 2D/3D image data, such as grayscale image data or amplitude image data, each of which may be depth-aligned with the 2D/3D image data.
  • the imaging device 104 may also process the 2D image data/datasets and/or 3D image datasets for use by other devices (e.g., the user computing device 102 , an external server).
  • the one or more processors 118 may process the image data or datasets captured, scanned, or sensed by the imaging assembly 126 .
  • the processing of the image data may generate post-imaging data that may include metadata, simplified data, normalized data, result data, status data, or alert data as determined from the original scanned or sensed image data.
  • the image data and/or the post-imaging data may be sent to the user computing device 102 executing the smart imaging application 116 for viewing, manipulation, and/or otherwise interaction.
  • the image data and/or the post-imaging data may be sent to a server for storage or for further manipulation.
  • the user computing device 102 , imaging device 104 , and/or external server or other centralized processing unit and/or storage may store such data, and may also send the image data and/or the post-imaging data to another application implemented on a user device, such as a mobile device, a tablet, a handheld device, or a desktop device.
  • Each of the one or more memories 110 , 120 may include one or more forms of volatile and/or non-volatile, fixed and/or removable memory, such as read-only memory (ROM), erasable programmable read-only memory (EPROM), random access memory (RAM), electrically erasable programmable read-only memory (EEPROM), and/or other hard drives, flash memory, MicroSD cards, and others.
  • a computer program or computer based product, application, or code may be stored on a computer usable storage medium, or tangible, non-transitory computer-readable medium (e.g., standard random access memory (RAM), an optical disc, a universal serial bus (USB) drive, or the like) having such computer-readable program code or computer instructions embodied therein, wherein the computer-readable program code or computer instructions may be installed on or otherwise adapted to be executed by the one or more processors 108 , 118 (e.g., working in connection with the respective operating system in the one or more memories 110 , 120 ) to facilitate, implement, or perform the machine readable instructions, methods, processes, elements or limitations, as illustrated, depicted, or described for the various flowcharts, illustrations, diagrams, figures, and/or other disclosure herein.
  • the program code may be implemented in any desired program language, and may be implemented as machine code, assembly code, byte code, interpretable source code or the like (e.g., via Golang, Python, C, C++, C#, Objective-C, Java, Scala, ActionScript, JavaScript, HTML, CSS, XML, etc.).
  • the one or more memories 110 , 120 may store an operating system (OS) (e.g., Microsoft Windows, Linux, Unix, etc.) capable of facilitating the functionalities, apps, methods, or other software as discussed herein.
  • the one or more memories 110 may also store the smart imaging application 116 and/or the image enhancement application 128 , which may be configured to enable machine vision job construction/execution, as described further herein. Additionally, or alternatively, the smart imaging application 116 and/or the image enhancement application 128 may also be stored in the one or more memories 120 of the imaging device 104 , and/or in an external database (not shown), which is accessible or otherwise communicatively coupled to the user computing device 102 via the network 106 .
  • the one or more memories 110 , 120 may also store machine readable instructions, including any of one or more application(s), one or more software component(s), and/or one or more application programming interfaces (APIs), which may be implemented to facilitate or perform the features, functions, or other disclosure described herein, such as any methods, processes, elements or limitations, as illustrated, depicted, or described for the various flowcharts, illustrations, diagrams, figures, and/or other disclosure herein.
  • the applications, software components, or APIs may be, include, otherwise be part of, a machine vision based imaging application, such as the smart imaging application 116 and/or the image enhancement application 128 , where each may be configured to facilitate their various functionalities discussed herein.
  • one or more other applications executed by the one or more processors 108 , 118 may also be envisioned.
  • the one or more processors 108 , 118 may be connected to the one or more memories 110 , 120 via a computer bus responsible for transmitting electronic data, data packets, or otherwise electronic signals to and from the one or more processors 108 , 118 and one or more memories 110 , 120 in order to implement or perform the machine readable instructions, methods, processes, elements or limitations, as illustrated, depicted, or described for the various flowcharts, illustrations, diagrams, figures, and/or other disclosure herein.
  • the one or more processors 108 , 118 may interface with the one or more memories 110 , 120 via the computer bus to execute the operating system (OS).
  • the one or more processors 108 , 118 may also interface with the one or more memories 110 , 120 via the computer bus to create, read, update, delete, or otherwise access or interact with the data stored in the one or more memories 110 , 120 and/or external databases (e.g., a relational database, such as Oracle, DB2, MySQL, or a NoSQL based database, such as MongoDB).
  • the data stored in the one or more memories 110 , 120 and/or an external database may include all or part of any of the data or information described herein, including, for example, machine vision job images (e.g., images captured by the imaging device 104 in response to execution of a job script) and/or other suitable information.
  • networking interfaces 112 , 122 may be configured to communicate (e.g., send and receive) data via one or more external/network port(s) to one or more networks or local terminals, such as network 106 , described herein.
  • networking interfaces 112 , 122 may include a client-server platform technology such as ASP.NET, Java J2EE, Ruby on Rails, Node.js, or a web service or online API, responsible for receiving and responding to electronic requests.
  • the networking interfaces 112 , 122 may implement the client-server platform technology that may interact, via the computer bus, with the one or more memories 110 , 120 (including the applications(s), component(s), API(s), data, etc. stored therein) to implement or perform the machine readable instructions, methods, processes, elements or limitations, as illustrated, depicted, or described for the various flowcharts, illustrations, diagrams, figures, and/or other disclosure herein.
  • the networking interfaces 112 , 122 may include, or interact with, one or more transceivers (e.g., WWAN, WLAN, and/or WPAN transceivers) functioning in accordance with IEEE standards, 3GPP standards, or other standards, and that may be used in receipt and transmission of data via external/network ports connected to network 106 .
  • network 106 may comprise a private network or local area network (LAN). Additionally, or alternatively, network 106 may comprise a public network such as the Internet.
  • the network 106 may comprise routers, wireless switches, or other such wireless connection points communicating to the user computing device 102 (via the networking interface 112 ) and the imaging device 104 (via networking interface 122 ) via wireless communications based on any one or more of various wireless standards, including by non-limiting example, IEEE 802.11a/b/c/g (WIFI), the BLUETOOTH standard, or the like.
  • the I/O interfaces 114 , 124 may include or implement operator interfaces configured to present information to an administrator or operator and/or receive inputs from the administrator or operator.
  • An operator interface may provide a display screen (e.g., via the user computing device 102 and/or imaging device 104 ) which a user/operator may use to visualize any images, graphics, text, data, features, pixels, and/or other suitable visualizations or information.
  • the user computing device 102 and/or imaging device 104 may comprise, implement, have access to, render, or otherwise expose, at least in part, a graphical user interface (GUI) for displaying images, graphics, text, data, features, pixels, and/or other suitable visualizations or information on the display screen.
  • the I/O interfaces 114 , 124 may also include I/O components (e.g., ports, capacitive or resistive touch sensitive input panels, keys, buttons, lights, LEDs, any number of keyboards, mice, USB drives, optical drives, screens, touchscreens, etc.), which may be directly/indirectly accessible via or attached to the user computing device 102 and/or the imaging device 104 .
  • an administrator or user/operator may access the user computing device 102 and/or imaging device 104 to construct jobs, review images or other information, make changes, input responses and/or selections, and/or perform other functions.
  • the user computing device 102 may perform the functionalities as discussed herein as part of a “cloud” network or may otherwise communicate with other hardware or software components within the cloud to send, retrieve, or otherwise analyze data or information described herein.
  • FIG. 2A is a perspective view of the imaging device 104 of FIG. 1 , in accordance with embodiments described herein.
  • the imaging device 104 includes a housing 202 , an imaging aperture 204 , a user interface label 206 , a dome switch/button 208 , one or more light emitting diodes (LEDs) 210 , and mounting point(s) 212 .
  • the imaging device 104 may obtain job files from a user computing device (e.g., user computing device 102 ) which the imaging device 104 thereafter interprets and executes.
  • the instructions included in the job file may include device configuration settings (also referenced herein as “imaging settings”) operable to adjust the configuration of the imaging device 104 prior to capturing images of a target object.
  • the device configuration settings may include instructions to adjust one or more settings related to the imaging aperture 204 .
  • the job file may include device configuration settings to increase the aperture size of the imaging aperture 204 .
  • the imaging device 104 may interpret these instructions (e.g., via one or more processors 118 ) and accordingly increase the aperture size of the imaging aperture 204 .
  • the imaging device 104 may be configured to automatically adjust its own configuration to optimally conform to a particular machine vision job.
  • the imaging device 104 may include or otherwise be adaptable to include, for example but without limitation, one or more bandpass filters, one or more polarizers, one or more DPM diffusers, one or more C-mount lenses, and/or one or more C-mount liquid lenses over or otherwise influencing the received illumination through the imaging aperture 204 .
  • the user interface label 206 may include the dome switch/button 208 and one or more LEDs 210 , and may thereby enable a variety of interactive and/or indicative features. Generally, the user interface label 206 may enable a user to trigger and/or tune the imaging device 104 (e.g., via the dome switch/button 208 ) and to recognize when one or more functions, errors, and/or other actions have been performed or taken place with respect to the imaging device 104 (e.g., via the one or more LEDs 210 ).
  • the trigger function of a dome switch/button may enable a user to capture an image using the imaging device 104 and/or to display a trigger configuration screen of a user application (e.g., smart imaging application 116 , image enhancement application 128 ).
  • the trigger configuration screen may allow the user to configure one or more triggers for the imaging device 104 that may be stored in memory (e.g., one or more memories 110 , 120 ) for use in later developed machine vision jobs, as discussed herein.
  • the tuning function of a dome switch/button may enable a user to automatically and/or manually adjust the configuration of the imaging device 104 in accordance with a preferred/predetermined configuration and/or to display an imaging configuration screen of a user application (e.g., smart imaging application 116 , image enhancement application 128 ).
  • the imaging configuration screen may allow the user to configure one or more configurations of the imaging device 104 (e.g., aperture size, exposure length, etc.) that may be stored in memory (e.g., one or more memories 110 , 120 ) for use in later developed machine vision jobs, as discussed herein.
  • a user may utilize the imaging configuration screen (or more generally, the smart imaging application 116 and/or the image enhancement application 128 ) to establish two or more configurations of imaging settings for the imaging device 104 .
  • the user may then save these two or more configurations of imaging settings as part of a machine vision job that is then transmitted to the imaging device 104 in a job file containing one or more job scripts.
  • the one or more job scripts may then instruct the imaging device 104 processors (e.g., one or more processors 118 ) to automatically and sequentially adjust the imaging settings of the imaging device in accordance with one or more of the two or more configurations of imaging settings after each successive image capture.
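  • A brief sketch of this sequential adjustment is shown below, assuming a hypothetical camera object that exposes apply_settings and capture methods; neither method name is an actual interface of the imaging device 104.

```python
# Hypothetical sketch: cycle through two or more saved imaging-setting configurations,
# applying the next configuration after each successive image capture.
from itertools import cycle

def run_job(camera, configurations, num_captures):
    """configurations: list of dicts such as {"aperture": 4, "exposure_ms": 8} (illustrative)."""
    for _, settings in zip(range(num_captures), cycle(configurations)):
        camera.apply_settings(settings)  # assumed method: push settings to the device
        camera.capture()                 # assumed method: trigger one image capture
```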
  • the mounting point(s) 212 may enable a user to connect and/or removably affix the imaging device 104 to a mounting device (e.g., imaging tripod, camera mount, etc.), a structural surface (e.g., a warehouse wall, a warehouse ceiling, structural support beam, etc.), other accessory items, and/or any other suitable connecting devices, structures, or surfaces.
  • the imaging device 104 may be optimally placed on a mounting device in a distribution center, manufacturing plant, warehouse, and/or other facility to image and thereby monitor the quality/consistency of products, packages, and/or other items as they pass through the FOV of the imaging device 104 .
  • the imaging device 104 may include several hardware components contained within the housing 202 that enable connectivity to a computer network (e.g., network 106 ).
  • the imaging device 104 may include a networking interface (e.g., networking interface 122 ) that enables the imaging device 104 to connect to a network, such as a Gigabit Ethernet connection and/or a Dual Gigabit Ethernet connection.
  • the imaging device 104 may include transceivers and/or other communication components as part of the networking interface to communicate with other devices (e.g., the user computing device 102 ) via, for example, Ethernet/IP, PROFINET, Modbus TCP, CC-Link, USB 3.0, RS-232, and/or any other suitable communication protocol or combinations thereof.
  • FIG. 2B is a block diagram representative of an example logic circuit capable of implementing, for example, one or more components of the example imaging device 104 of FIG. 2A .
  • the example logic circuit of FIG. 2B is a processing platform 230 capable of executing instructions to, for example, implement operations of the example methods described herein, as may be represented by the flowcharts of the drawings that accompany this description.
  • Other example logic circuits capable of, for example, implementing operations of the example methods described herein include field programmable gate arrays (FPGAs) and application specific integrated circuits (ASICs).
  • the example processing platform 230 of FIG. 2B includes a processor 232 such as, for example, one or more microprocessors, controllers, and/or any suitable type of processor.
  • the example processing platform 230 of FIG. 2B includes memory (e.g., volatile memory, non-volatile memory) 234 accessible by the processor 232 (e.g., via a memory controller).
  • the example processor 232 interacts with the memory 234 to obtain, for example, machine-readable instructions stored in the memory 234 corresponding to, for example, the operations represented by the flowcharts of this disclosure.
  • the memory 234 also includes the smart imaging application 116 and, optionally, the image enhancement application 128 that are each accessible by the example processor 232 .
  • the smart imaging application 116 and/or the image enhancement application 128 may comprise rule-based instructions, an artificial intelligence (AI) and/or machine learning-based model, and/or any other suitable algorithm architecture or combination thereof configured to, for example, enhance image content captured by a machine vision camera (e.g., imaging device 104 ).
  • the example processor 232 may access the memory 234 to execute the smart imaging application 116 and/or the image enhancement application 128 when the imaging device 104 (via the imaging assembly 126 ) captures an image that includes a plurality of indicia that each encode a payload.
  • machine-readable instructions corresponding to the example operations described herein may be stored on one or more removable media (e.g., a compact disc, a digital versatile disc, removable flash memory, etc.) that may be coupled to the processing platform 230 to provide access to the machine-readable instructions stored thereon.
  • the example processing platform 230 of FIG. 2B also includes a networking interface 236 to enable communication with other machines via, for example, one or more networks.
  • the example networking interface 236 includes any suitable type of communication interface(s) (e.g., wired and/or wireless interfaces) configured to operate in accordance with any suitable protocol(s) (e.g., Ethernet for wired communications and/or IEEE 802.11 for wireless communications).
  • the example processing platform 230 of FIG. 2B also includes input/output (I/O) interfaces 238 to enable receipt of user input and communication of output data to the user.
  • Such user input and output may be provided via, for example, any number of keyboards, mice, USB drives, optical drives, screens, touchscreens, etc.
  • FIG. 3A depicts an example application interface 300 which includes an example image 314 captured by an imaging device 104 .
  • the example application interface 300 may be displayed as part of a smart imaging application (e.g., smart imaging application 116 ), an image enhancement application (e.g., image enhancement application 128 ), and/or any other suitable application or combinations thereof.
  • the example application interface 300 may be rendered on an interface of a user computing device (e.g., user computing device 102 ) as a result of the imaging device executing a machine vision job, and may be formatted in accordance with instructions included as part of the smart imaging application 116 .
  • the particular image renderings and/or other aspects of the example application interface 300 may be determined and displayed in accordance with instructions included as part of the image enhancement application 128 , as described herein.
  • each image displayed in the example application interface 300 will include one or more barcodes, such as barcode 302 and barcode 304 .
  • the application (e.g., image enhancement application 128 ) may receive, from the imaging device, decoded barcode data associated with either barcode 302 / 304 (this may also be referred to as "barcode result data").
  • This information can be displayed in an appropriate location within the interface.
  • the data decoded from barcode 302 is displayed as a string 306 and the data decoded from barcode 304 is displayed as a string 308 within the entry window 310 (labeled "View Results" in FIGS. 3A and 3B ).
  • the example application interface 300 additionally includes a settings portion 316 and a filmstrip portion 318 .
  • the settings portion 316 may generally allow a user to configure particular actions performed as part of the machine vision job, barcode decoding job, and/or any other suitable executable job that is executed by the imaging device.
  • the user may enable the “Decode All” option illustrated in FIG. 3A to configure the indicia decoder included as part of the machine vision job to decode any decodable indicia that is identifiable within the example image 314 .
  • the indicia decoder may decode each of the payloads from barcodes 302 , 304 , and display the results of the decoding within the entry window 310 .
  • the filmstrip portion 318 may include all images captured by the imaging device during an individual execution of the machine vision job. For example, as a target object passes by the imaging device, the imaging device may capture one or more images of the target object, and each of those captured images may be displayed within the filmstrip portion 318 .
  • FIG. 3B depicts another example application interface 330 which includes another example image 320 captured by an imaging device 104 .
  • This example image 320 includes multiple indicia (e.g., quick response (QR) codes), and several of these indicia 322 , 324 may be decoded by the indicia decoder included as part of the machine vision job executed by the imaging device.
  • the data decoded from indicia 322 is displayed as a string 326 , and the data decoded from indicia 324 is displayed as a string 328 , within the entry window 310 .
  • the indicia decoding performed as part of machine vision jobs described in the present disclosure may be configured to decode a payload from any suitable indicia, such as barcodes, QR codes, data matrices, etc.
  • the application (e.g., image enhancement application 128 ) provides a means for a user to select a desired indicia and have the application automatically center the image about that indicia and zoom in on it to a predetermined zoom level. Achieving this functionality can be particularly difficult because indicia come in a wide variety of shapes and sizes (e.g., barcodes 302 , 304 and indicia 322 , 324 ), and because, depending on the operating environment, the dimensions of the indicia as they appear in the captured image may vary greatly.
  • the application may achieve this via the following.
  • the user may select the particular indicia by selecting a particular entry from the entry window 310 . This can be done by hovering the pointer of a mouse over a line having the payload associated with the indicia of interest and then making a selection by clicking a mouse button. Similar functionality may, for example, be achieved via a keyboard or any other input device that allows for a selection of a specific entry associated with a desired indicia in the entry window 310 . In other instances, the user may make a selection by hovering the mouse (or any other input device) over the desired indicia in the display region 312 and then executing the selection by clicking a mouse button.
  • references to input devices like a mouse should not be seen as limiting and other input devices should be considered to be within the scope of this disclosure.
  • For example, on a touchscreen device, a user's finger may provide the same input functions as a computer mouse.
  • the application determines the bounds of a bounding box that substantially encompasses at least some of the indicia in the image. In some implementations where the determination is made prior to the selection of the desired indicia, the application may be configured to display at least one bounding box around each corresponding indicia visible in the image shown in the display region 312 .
  • each bounding box may comprise a series of pixel points which correspond to the outer edges of each respective indicia. Consequently, each point of the bounding box will have an x,y pixel coordinate within the coordinate system of the image.
  • From these points, the application (e.g., image enhancement application 128 ) can determine the highest pixel coordinate value in the vertical direction (y-axis) (also referred to as the upper pixel coordinate limit), the lowest pixel coordinate value in the vertical direction (y-axis) (also referred to as the lower pixel coordinate limit), the farthest pixel coordinate value on one side (e.g., the left side) in the horizontal direction (x-axis) (also referred to as the first side pixel coordinate limit), and the farthest pixel coordinate value on the other side (e.g., the right side) in the horizontal direction (x-axis) (also referred to as the second side pixel coordinate limit).
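  • For illustration, the sketch below derives these four coordinate limits from a bounding box's pixel points and pads them into a secondary box. The dataclass, function names, margin value, and the assumption that the y coordinate grows downward are all illustrative rather than the patent's implementation.

```python
# Illustrative sketch: compute the upper/lower/first-side/second-side pixel coordinate
# limits of an indicia from its bounding-box points, then pad them into a secondary box.
from dataclasses import dataclass

@dataclass
class Limits:
    upper: int        # smallest row (y) covered by the indicia
    lower: int        # largest row (y) covered by the indicia
    first_side: int   # smallest column (x), e.g. the left edge
    second_side: int  # largest column (x), e.g. the right edge

def coordinate_limits(points):
    """points: list of (x, y) pixel coordinates along the indicia's outer edges."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return Limits(upper=min(ys), lower=max(ys), first_side=min(xs), second_side=max(xs))

def secondary_box(limits, margin=10):
    """Expand the limits slightly so the secondary box substantially encompasses the indicia."""
    return Limits(limits.upper - margin, limits.lower + margin,
                  limits.first_side - margin, limits.second_side + margin)
```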
  • FIG. 4A depicts an example application interface 410 that includes the secondary box 412 .
  • the displayed image may include multiple indicia, and more than one (e.g., the barcode contained within the secondary box 412 and the barcode below the secondary box 412 ) may be decodable.
  • the image enhancement application may automatically recognize and decode each indicia included within the image, and may display the payloads resulting from the decoding.
  • the image enhancement application may determine the secondary box 412 in response to a user selection of the indicia included within the secondary box 412 .
  • the image enhancement application may automatically determine the secondary box 412 and a secondary box substantially encompassing the other indicia in response to decoding each indicia.
  • the image enhancement application may determine the secondary box(es) based on the coordinate limits corresponding to the decoded indicia.
  • the displayed image may include multiple indicia, but only one (e.g., the barcode contained within the secondary box 412 ) may be decodable.
  • the image enhancement application may automatically recognize and decode the indicia illustrated within the secondary box 412 . Thereafter, the image enhancement application may determine the secondary box 412 based on the coordinate limits corresponding to the decoded indicia.
  • FIG. 4B depicts an example application interface 420 that includes the secondary box 422 .
  • the displayed image may include portions of multiple indicia, but only one (e.g., the QR code contained within the secondary box 422 ) may be fully decodable.
  • the image enhancement application may automatically recognize and decode the indicia illustrated within the secondary box 422 . Thereafter, the image enhancement application may determine the secondary box 422 based on the coordinate limits corresponding to the decoded indicia.
  • the application may further be configured to reposition the image in the display region such that the center point of the secondary box associated with the indicia of interest (e.g., the indicia that was previously selected for viewing/analysis) is positioned within some threshold distance from the center point of the display region.
  • the threshold distance may be zero and the center point of the secondary box can overlay the center point of the display region.
  • Alternatively, the threshold may be a non-zero distance that could be expressed, for example, in terms of pixels.
  • the outcome of such positioning is that the secondary box (and thus the indicia associated with it) will be centered at or visually near the center of the display region.
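  • A sketch of this repositioning step, under the assumption that the displayed image can be panned by a simple (dx, dy) translation; the function name, box format, and threshold handling are illustrative.

```python
# Illustrative sketch: pan offset that brings the secondary box's center to within
# `threshold` pixels of the display region's center.
def center_offset(box, display_w, display_h, threshold=0):
    """box: (left, top, right, bottom) of the secondary box in displayed-image coordinates."""
    box_cx = (box[0] + box[2]) / 2
    box_cy = (box[1] + box[3]) / 2
    dx = display_w / 2 - box_cx
    dy = display_h / 2 - box_cy
    if (dx ** 2 + dy ** 2) ** 0.5 <= threshold:
        return 0.0, 0.0          # centers already close enough; no repositioning needed
    return dx, dy                # translate the displayed image by (dx, dy)
```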
  • the image enhancement application may display the example application interface 500 in response to receiving a selection of an indicia from a user, and repositioning the image within the display region such that the indicia is substantially centered within the display region.
  • the image enhancement application may receive an indication that a user has selected the indicia 502 within the image, and may proceed to determine the secondary box 504 based on the various coordinate limits (e.g., upper pixel coordinate limit, lower pixel coordinate limit, first side pixel coordinate limit, second side pixel coordinate limit) corresponding to the indicia 502 .
  • the image enhancement application may reposition the image containing the indicia 502 such that the distance 506 c between the two center-points 506 a , 506 b is at least less than a threshold distance.
  • the application may also compare the center-point 506 a of the secondary box to the center-point 506 b of the display region to determine whether or not to reposition the image in order to further decrease the distance 506 c between the two center-points 506 a , 506 b .
  • the image enhancement application may position the indicia 502 chosen by the user as close to the center-point of the display region as possible (e.g., the distance 506 c between the two center-points 506 a , 506 b is eliminated).
  • the application may also scale the viewpoint (up or down) such that the secondary box 504 occupies some predetermined amount of the display window.
  • the scaling performed by the image enhancement application may be expressed as a ratio of the pixels occupied by the secondary box 504 in a vertical direction to the vertical pixel count of the display region.
  • For example, if the desired ratio is 1:2, the secondary box 504 has a height of 200 pixels, and the display region has a height of 1000 pixels, the viewpoint will be scaled 2.5 times such that the secondary box 504 occupies 500 pixels.
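  • A worked sketch of the vertical-ratio arithmetic in the example above; the function name and the use of a fractional ratio argument are assumptions.

```python
# Illustrative sketch: scale factor that makes the secondary box occupy the desired
# fraction of the display region's height. With a 1:2 ratio, a 200-pixel-tall box,
# and a 1000-pixel-tall display region, the factor is (1000 * 0.5) / 200 = 2.5.
def vertical_scale(box_height_px, display_height_px, desired_ratio=0.5):
    return (display_height_px * desired_ratio) / box_height_px

assert vertical_scale(200, 1000) == 2.5
```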
  • the image enhancement application may apply the same or a similar approach along the horizontal axis.
  • the horizontal secondary box to display region ratio may not be equal to the vertical secondary box to display region ratio.
  • the application will scale the viewpoint such that both the horizontal secondary-box-to-display-region ratio and the vertical secondary-box-to-display-region ratio are at or below the desired threshold. This can help prevent instances where a relatively narrow but tall viewpoint is scaled based on a horizontal ratio without regard for a vertical ratio.
  • For example, assume that scaling the image to where the horizontal secondary-box-to-display-region ratio is 1:2 would require scaling the image by a factor of 10. In that case, the image enhancement application may limit the scaling to a factor of 1.25, causing the secondary box to increase to dimensions of 500 × 125 pixels and meeting the 1:2 threshold requirement along the vertical axis.
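  • One way to honor both thresholds with a single uniform scale factor, as in the example above, is to take the smaller of the two per-axis limits. The sketch below assumes uniform scaling and illustrative names; it is not the patent's implementation.

```python
# Illustrative sketch: choose a scale factor that keeps BOTH the vertical and horizontal
# secondary-box-to-display-region ratios at or below their thresholds. A narrow but tall
# box is therefore limited by its vertical ratio even if the horizontal ratio alone would
# allow a much larger zoom.
def bounded_scale(box_w, box_h, display_w, display_h, horiz_ratio=0.5, vert_ratio=0.5):
    max_scale_for_width = (display_w * horiz_ratio) / box_w   # largest zoom the width allows
    max_scale_for_height = (display_h * vert_ratio) / box_h   # largest zoom the height allows
    return min(max_scale_for_width, max_scale_for_height)
```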
  • a user may configure the image enhancement application to scale the image such that a portion of the selected indicia is not featured within the display region after the image is scaled.
  • the image enhancement application may determine horizontal distances 508 between the vertical sides of the secondary box 504 and the vertical edges of the display region and vertical distances 509 between the horizontal sides of the secondary box 504 and the horizontal edges of the display region. Assume that the desired aspect ratio for the indicia 502 relative to the display region is 1:2. In this example, the image enhancement application may scale the image such that the total length of the horizontal distances 508 is equal to the length of one horizontal side of the secondary box 504 and the total length of the vertical distances 509 is equal to the length of one vertical side of the secondary box 504 . Of course, it is to be understood that the image enhancement application may scale the image to any suitable aspect ratio, as previously described.
  • the region outside of the selected indicia may be masked to highlight the selected indicia.
  • This mask may be applied outside of the secondary box or outside the bounding box of the selected indicia.
  • the mask may be transparent, opaque, color-changing, sharpness-decreasing, or of any other nature that would call the indicia of interest to the forefront of the user's attention.
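  • As one illustration of such a mask, assuming the image is held as a NumPy array and that a simple darkening of the surrounding region is acceptable (the dimming factor and function name are assumptions), the area outside the secondary box could be masked as follows.

```python
# Illustrative sketch: dim everything outside the secondary box so that the selected
# indicia is called to the forefront of the user's attention.
import numpy as np

def mask_outside(image, box, dim_factor=0.3):
    """image: H x W (or H x W x C) array; box: (left, top, right, bottom) in pixels."""
    left, top, right, bottom = box
    masked = (image.astype(np.float32) * dim_factor).astype(image.dtype)
    masked[top:bottom, left:right] = image[top:bottom, left:right]
    return masked
```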
  • the image enhancement application may display an example application interface 510 that features a selected indicia 512 and a masked background portion 514 .
  • the user may configure the image enhancement application to automatically mask the masked background portion 514 upon completion of the image scaling, and/or at any other point after the user selects the selected indicia 512 .
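  • The masking behavior described above could be approximated as in the following hedged sketch, which uses NumPy on a synthetic grayscale frame and a simple darkening factor as a stand-in for whatever visual treatment (transparency, blur, color change) the image enhancement application actually applies; the function and variable names are hypothetical:

```python
import numpy as np

def mask_outside_box(image: np.ndarray, box: tuple, dim_factor: float = 0.3) -> np.ndarray:
    """Darken every pixel outside (x0, y0, x1, y1) so the selected indicia stands out."""
    x0, y0, x1, y1 = box
    masked = (image * dim_factor).astype(image.dtype)   # dimmed copy of the whole image
    masked[y0:y1, x0:x1] = image[y0:y1, x0:x1]          # restore the region inside the box
    return masked

# Illustrative use on a synthetic 1000x1000 grayscale frame with a box at (400, 450)-(600, 550).
frame = np.full((1000, 1000), 200, dtype=np.uint8)
highlighted = mask_outside_box(frame, box=(400, 450, 600, 550))
```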
  • the image enhancement application may provide a variety of graphical indications to allow a user to understand what indicia is currently displayed, and where the indicia is located within the displayed image.
  • FIG. 5C depicts an example application interface 520 depicting the selected indicia 512 , the masked background portion 514 , and a miniature image 522 that includes a scaled image indicator 524 .
  • the miniature image 522 may generally represent the original image from which the displayed image was generated (e.g., via repositioning, scaling, masking), and the scaled image indicator 524 may provide a graphical indication of the displayed image as part of the original image for a user's reference.
  • the image enhancement application may automatically, upon selection of the indicia, scale the image to generate the displayed image and display the miniature image 522 and scaled image indicator 524 so that the user does not lose track of where the displayed image and selected indicia 512 are relative to the original image.
  • the image enhancement application may render an animation, upon the user selection of the selected indicia 512 , within the display region that minimizes the original image into the miniature image 522 and thereafter generate the scaled image indicator 524 as an overlay over the miniature image 522 .
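  • One plausible way to derive the scaled image indicator 524 is to map the currently visible viewport of the original image into the miniature's coordinate space; the sketch below assumes an axis-aligned viewport and uses hypothetical names and example dimensions:

```python
def indicator_rect(viewport, original_size, mini_size):
    """Map the visible viewport (in original-image pixels) onto the miniature image."""
    vx0, vy0, vx1, vy1 = viewport
    ow, oh = original_size
    mw, mh = mini_size
    sx, sy = mw / ow, mh / oh
    return (round(vx0 * sx), round(vy0 * sy), round(vx1 * sx), round(vy1 * sy))

# If the display currently shows pixels (300, 350)-(700, 750) of a 2000x1500 capture,
# and the miniature is rendered at 200x150, the indicator covers roughly (30, 35)-(70, 75).
print(indicator_rect((300, 350, 700, 750), (2000, 1500), (200, 150)))
```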
  • FIG. 6 is a flowchart representative of a method 600 for enhancing image content captured by a machine vision camera, in accordance with embodiments described herein.
  • the method 600 includes receiving an image captured by the machine vision camera (block 602 ).
  • the image may be received at an application executing on a user computing device communicatively coupled to a machine vision camera, and the image may include a plurality of indicia (e.g., barcodes, QR codes, etc.).
  • each of the plurality of indicia may encode a payload.
  • the method 600 may also include identifying, in the received image, each of the indicia (block 604 ). Accordingly, for each respective indicia in the image, the method 600 may include determining bounds of a respective bounding box that substantially encompass each respective indicia in the image (block 606 ). For example, the image enhancement application may determine the bounds of a respective bounding box corresponding to a single indicia by identifying the extreme coordinates of the indicia that define the outermost boundary of the indicia within the image. Thus, in certain aspects, the bounding box may be any suitable shape in order to substantially encompass the indicia.
  • the method 600 may also include displaying a plurality of entries, wherein each of the plurality of entries corresponds to a respective indicia of the plurality of indicia (block 608 ).
  • each of the plurality of entries includes a payload of the corresponding indicia.
  • a user may interact with the interface in order to indicate a selection of one or more of the displayed entries and/or indicia.
  • the method 600 may include receiving, at the interface, a selection of one of the entries resulting in a selected entry that has a corresponding indicia (block 610 ).
  • the application may mask a region of the display region upon selection of an entry/indicia by a user.
  • the method 600 may include masking a masked region in the display region responsive to receiving the selection.
  • the masked region may be a region outside of at least one of (i) the secondary box or (ii) the respective bounding box of the corresponding indicia.
  • the method 600 may also include determining an upper pixel coordinate limit, a lower pixel coordinate limit, a first side pixel coordinate limit, and a second side coordinate limit (block 612 ).
  • the image enhancement application may determine these coordinate limits based on the respective bounding box of the corresponding indicia. Based on these coordinate limits, the image enhancement application may determine a secondary box having an upper bound, a lower bound, a first side bound, and a second side bound (block 614 ).
  • the method 600 may also include displaying the image in a display region of the interface such that a center-point of the secondary box is positioned within a predetermined distance threshold from a center-point of the display region (block 616 ).
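  • Blocks 612-616 can be read as building an axis-aligned secondary box from the four pixel coordinate limits and then translating the image so that the two center-points nearly coincide; the following sketch assumes image coordinates in which y grows downward, and all names are illustrative rather than taken from the disclosure:

```python
def secondary_box(upper, lower, first_side, second_side):
    """Axis-aligned secondary box (x0, y0, x1, y1) built from the four coordinate limits."""
    return first_side, upper, second_side, lower

def centering_offset(box, display_size):
    """Translation (dx, dy) that moves the box center onto the display-region center."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    return display_size[0] / 2 - cx, display_size[1] / 2 - cy

# Illustrative limits for a selected indicia and a 1000x800-pixel display region.
box = secondary_box(upper=100, lower=300, first_side=150, second_side=350)
print(centering_offset(box, display_size=(1000, 800)))  # -> (250.0, 200.0)
```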
  • the image enhancement application may display the respective bounding box on an interface of the application.
  • the image enhancement application may display the respective bounding box corresponding to each of the plurality of indicia appearing within a visible portion of the image displayed in the display region.
  • the method 600 may also include scaling the image such that at least one of (i) a first vertical pixel count between the upper bound and the lower bound is within a first predetermined ratio threshold of a second vertical pixel count of the display region, or (ii) a first horizontal pixel count between the first side bound and the second side bound is within a second predetermined ratio threshold of a second horizontal pixel count of the display region (block 618 ).
  • at least one of the first predetermined ratio threshold or the second predetermined ratio threshold is inclusively between 1:2 and 2:3.
  • the image enhancement application may scale the image such that (i) the first vertical pixel count does not exceed the first predetermined ratio threshold of the second vertical pixel count, and (ii) the first horizontal pixel count does not exceed the second predetermined ratio threshold of the second horizontal pixel count.
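  • The constraint of block 618 could be verified after scaling with a check along the following lines; the 1:2 values are only the example thresholds mentioned above, and the function name is hypothetical:

```python
def within_ratio_thresholds(box_w, box_h, display_w, display_h,
                            horizontal_threshold=0.5, vertical_threshold=0.5):
    """True if the scaled secondary box stays at or below both ratio thresholds."""
    return (box_w / display_w <= horizontal_threshold and
            box_h / display_h <= vertical_threshold)

print(within_ratio_thresholds(125, 500, 1000, 1000))  # -> True (the example result above)
```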
  • the image enhancement application may display a miniature version of the image in the display region as an overlay covering a portion of the scaled image.
  • the miniature version of the image includes an indicated portion representing the scaled image.
  • logic circuit is expressly defined as a physical device including at least one hardware component configured (e.g., via operation in accordance with a predetermined configuration and/or via execution of stored machine-readable instructions) to control one or more machines and/or perform operations of one or more machines.
  • Examples of a logic circuit include one or more processors, one or more coprocessors, one or more microprocessors, one or more controllers, one or more digital signal processors (DSPs), one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), one or more microcontroller units (MCUs), one or more hardware accelerators, one or more special-purpose computer chips, and one or more system-on-a-chip (SoC) devices.
  • Some example logic circuits, such as ASICs or FPGAs, are specifically configured hardware for performing operations (e.g., one or more of the operations described herein and represented by the flowcharts of this disclosure, if such are present).
  • Some example logic circuits are hardware that executes machine-readable instructions to perform operations (e.g., one or more of the operations described herein and represented by the flowcharts of this disclosure, if such are present). Some example logic circuits include a combination of specifically configured hardware and hardware that executes machine-readable instructions.
  • the above description refers to various operations described herein and flowcharts that may be appended hereto to illustrate the flow of those operations. Any such flowcharts are representative of example methods disclosed herein. In some examples, the methods represented by the flowcharts implement the apparatus represented by the block diagrams. Alternative implementations of example methods disclosed herein may include additional or alternative operations. Further, operations of alternative implementations of the methods disclosed herein may be combined, divided, re-arranged, or omitted.
  • the operations described herein are implemented by machine-readable instructions (e.g., software and/or firmware) stored on a medium (e.g., a tangible machine-readable medium) for execution by one or more logic circuits (e.g., processor(s)).
  • the operations described herein are implemented by one or more configurations of one or more specifically designed logic circuits (e.g., ASIC(s)).
  • the operations described herein are implemented by a combination of specifically designed logic circuit(s) and machine-readable instructions stored on a medium (e.g., a tangible machine-readable medium) for execution by logic circuit(s).
  • each of the terms “tangible machine-readable medium,” “non-transitory machine-readable medium” and “machine-readable storage device” is expressly defined as a storage medium (e.g., a platter of a hard disk drive, a digital versatile disc, a compact disc, flash memory, read-only memory, random-access memory, etc.) on which machine-readable instructions (e.g., program code in the form of, for example, software and/or firmware) are stored for any suitable duration of time (e.g., permanently, for an extended period of time (e.g., while a program associated with the machine-readable instructions is executing), and/or a short period of time (e.g., while the machine-readable instructions are cached and/or during a buffering process)).
  • each of the terms “tangible machine-readable medium,” “non-transitory machine-readable medium” and “machine-readable storage device” is expressly defined to exclude propagating signals. That is, as used in any claim of this patent, none of the terms “tangible machine-readable medium,” “non-transitory machine-readable medium,” and “machine-readable storage device” can be read to be implemented by a propagating signal.
  • an element preceded by “comprises . . . a”, “has . . . a”, “includes . . . a”, or “contains . . . a” does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, contains the element.
  • the terms “a” and “an” are defined as one or more unless explicitly stated otherwise herein.
  • the terms “substantially”, “essentially”, “approximately”, “about” or any other version thereof, are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1% and in another embodiment within 0.5%.
  • the term “coupled” as used herein is defined as connected, although not necessarily directly and not necessarily mechanically.
  • a device or structure that is “configured” in a certain way is configured in at least that way, but may also be configured in ways that are not listed.


Abstract

Systems and methods for enhancing image content captured by a machine vision camera are disclosed herein. An example method includes receiving an image captured by a machine vision camera that includes a plurality of indicia, determining bounds of a bounding box for each respective indicia, and displaying a plurality of entries corresponding to the respective indicia. The example method further includes receiving a selection of an entry, and determining coordinate limits of the bounding box and a secondary box of the entry. The example method further includes displaying the image such that a center-point of the secondary box is positioned near a center-point of a display region, and scaling the image such that at least one of a vertical pixel count or a horizontal pixel count of the secondary box is within a ratio threshold of a vertical pixel count or a horizontal pixel count of the display region.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims priority from provisional U.S. Patent Application Ser. No. 63/182,392, filed on Apr. 30, 2021, and incorporated herein by reference in its entirety.
  • BACKGROUND
  • Over the years, industrial automation has come to rely heavily on machine vision components capable of assisting operators in a wide variety of tasks. In some implementations, machine vision components, like cameras, are utilized to track passing objects, like those which move on conveyor belts past stationary cameras. Often these cameras, along with the backend software, are used to capture images and determine a variety of parameters associated with the passing items. One aspect of this requires easy-to-use means to visualize various elements on the screen of a user computing device executing the backend software. Thus, there exists a need for improved systems and methods which provide such features.
  • SUMMARY
  • In an embodiment, the present invention is a method for enhancing image content captured by a machine vision camera. The method may comprise: receiving, at an application executing on a user computing device communicatively coupled to a machine vision camera, an image captured by the machine vision camera, the image including a plurality of indicia, each of the plurality of indicia encoding a payload; identifying, in the image, each of the indicia; for each respective indicia in the image, determining bounds of a respective bounding box that substantially encompass each respective indicia in the image; displaying, on an interface of the application, a plurality of entries, each of the plurality of entries corresponding to a respective indicia of the plurality of indicia; receiving, at the interface, a selection of one of the entries resulting in a selected entry that has a corresponding indicia; determining, from the respective bounding box of the corresponding indicia, an upper pixel coordinate limit, a lower pixel coordinate limit, a first side pixel coordinate limit, and a second side coordinate limit; based on the upper pixel coordinate limit, the lower pixel coordinate limit, the first side pixel coordinate limit, and the second side coordinate limit, determining a secondary box having an upper bound, a lower bound, a first side bound, and a second side bound; displaying the image in a display region of the interface such that a center-point of the secondary box is positioned within a predetermined distance threshold from a center-point of the display region; and scaling the image such that at least one of (i) a first vertical pixel count between the upper bound and the lower bound is within a first predetermined ratio threshold of a second vertical pixel count of the display region, or (ii) a first horizontal pixel count between the first side bound and the second side bound is within a second predetermined ratio threshold of a second horizontal pixel count of the display region.
  • In a variation of this embodiment, the method may further comprise for each respective indicia in the image, displaying the respective bounding box on an interface of the application.
  • In another variation of this embodiment, at least one of the first predetermined ratio threshold or the second predetermined ratio threshold is inclusively between 1:2 and 2:3.
  • In yet another variation of this embodiment, scaling the image includes scaling the image such that (i) the first vertical pixel count does not exceed the first predetermined ratio threshold of the second vertical pixel count, and (ii) the first horizontal pixel count does not exceed the second predetermined ratio threshold of the second horizontal pixel count.
  • In still another variation of this embodiment, each of the plurality of entries includes a payload of the corresponding indicia.
  • In yet another variation of this embodiment, the method further comprises displaying the respective bounding box corresponding to each of the plurality of indicia appearing within a visible portion of the image displayed in the display region.
  • In still another variation of this embodiment, the method further comprises masking a masked region in the display region responsive to receiving the selection, wherein the masked region is a region outside of at least one of (i) the secondary box or (ii) the respective bounding box of the corresponding indicia.
  • In yet another variation of this embodiment, at least one of the first predetermined ratio threshold and the second predetermined ratio threshold is user-definable such that a portion of the corresponding indicia is excluded from the scaled image.
  • In still another variation of this embodiment, the method further comprises displaying a miniature version of the image in the display region as an overlay covering a portion of the scaled image, wherein the miniature version of the image includes an indicated portion representing the scaled image.
  • In another embodiment, the present invention is a method for enhancing image content captured by a machine vision camera. The method may comprise: receiving, at an application executing on a user computing device communicatively coupled to a machine vision camera, an image captured by the machine vision camera, the image including a plurality of indicia, each of the plurality of indicia encoding a payload; identifying, in the image, each of the indicia; for each of the indicia in the image: determining bounds of a respective bounding box that substantially encompass the respective indicia, determining, from the respective bounding box, an upper pixel coordinate limit, a lower pixel coordinate limit, a first side pixel coordinate limit, and a second side coordinate limit for the respective indicia, and based on the upper pixel coordinate limit, the lower pixel coordinate limit, the first side pixel coordinate limit, and the second side coordinate limit of the respective bounding box, determining a respective secondary box having an upper bound, a lower bound, a first side bound, and a second side bound for the respective indicia; displaying, on an interface of the application, a plurality of entries, each of the plurality of entries corresponding to a respective indicia of the plurality of indicia; receiving, at the interface, a selection of one of the entries resulting in a selected entry that has a corresponding indicia; displaying the image in a display region of the interface such that a center-point of the respective secondary box of the corresponding indicia is positioned within a predetermined distance threshold from a center-point of the display region; and scaling the image such that at least one of (i) a first vertical pixel count between the upper bound of the respective secondary box of the corresponding indicia and the lower bound of the respective secondary box of the corresponding indicia is within a first predetermined ratio threshold of a second vertical pixel count of the display region, or (2) a first horizontal pixel count between the first side bound of the respective secondary box of the corresponding indicia and the second side bound of the respective secondary box of the corresponding indicia is within a second predetermined ratio threshold of a second horizontal pixel count of the display region.
  • In a variation of this embodiment, the method further comprises for each of the indicia in the image, displaying the respective bounding box on an interface of the application.
  • In another variation of this embodiment, at least one of the first predetermined ratio threshold or the second predetermined ratio threshold is inclusively between 1:2 and 2:3.
  • In yet another variation of this embodiment, scaling the image further includes scaling the image such that (i) the first vertical pixel count does not exceed the first predetermined ratio threshold of the second vertical pixel count, and (ii) the first horizontal pixel count does not exceed the second predetermined ratio threshold of the second horizontal pixel count.
  • In still another variation of this embodiment, each of the plurality of entries includes a payload of the corresponding indicia.
  • In yet another variation of this embodiment, the method further comprises displaying the respective bounding box corresponding to each of the plurality of indicia appearing within a visible portion of the image displayed in the display region.
  • In still another variation of this embodiment, the method further comprises masking a masked region in the display region responsive to receiving the selection, wherein the masked region is a region outside of one of (i) the respective secondary box of the corresponding indicia or (ii) the respective bounding box of the corresponding indicia.
  • In yet another embodiment, the present invention is a machine vision system for enhancing image content captured by a machine vision camera. The machine vision system may comprise: a machine vision camera configured to capture an image including a plurality of indicia, each of the plurality of indicia encoding a payload; and a user computing device executing an application, wherein the user computing device is communicatively coupled to the machine vision camera and is configured to: receive the image captured by the machine vision camera, identify, in the image, each of the indicia, for each respective indicia in the image, determine bounds of a respective bounding box that substantially encompass each respective indicia in the image, display, on an interface of the application, a plurality of entries, each of the plurality of entries corresponding to a respective indicia of the plurality of indicia, receive, at the interface, a selection of one of the entries resulting in a selected entry that has a corresponding indicia, determine, from the respective bounding box of the corresponding indicia, an upper pixel coordinate limit, a lower pixel coordinate limit, a first side pixel coordinate limit, and a second side coordinate limit, based on the upper pixel coordinate limit, the lower pixel coordinate limit, the first side pixel coordinate limit, and the second side coordinate limit, determine a secondary box having an upper bound, a lower bound, a first side bound, and a second side bound, display the image in a display region of the interface such that a center-point of the secondary box is positioned within a predetermined distance threshold from a center-point of the display region, and scale the image such that at least one of (i) a first vertical pixel count between the upper bound and the lower bound is within a first predetermined ratio threshold of a second vertical pixel count of the display region, or (ii) a first horizontal pixel count between the first side bound and the second side bound is within a second predetermined ratio threshold of a second horizontal pixel count of the display region.
  • In a variation of this embodiment, the user computing device is further configured to for each of the indicia in the image, display the respective bounding box on an interface of the application.
  • In another variation of this embodiment, at least one of the first predetermined ratio threshold or the second predetermined ratio threshold is inclusively between 1:2 and 2:3.
  • In yet another variation of this embodiment, the user computing device is further configured to scale the image such that (i) the first vertical pixel count does not exceed the first predetermined ratio threshold of the second vertical pixel count, and (ii) the first horizontal pixel count does not exceed the second predetermined ratio threshold of the second horizontal pixel count.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying figures, where like reference numerals refer to identical or functionally similar elements throughout the separate views, together with the detailed description below, are incorporated in and form part of the specification, and serve to further illustrate embodiments of concepts that include the claimed invention, and explain various principles and advantages of those embodiments.
  • FIG. 1 is an example system configured to enhance image content captured by a machine vision camera, in accordance with embodiments described herein.
  • FIG. 2A is a perspective view of the imaging device of FIG. 1, in accordance with embodiments described herein.
  • FIG. 2B is a block diagram of an example logic circuit for implementing example methods and/or operations described herein.
  • FIGS. 3A and 3B each depict an example application interface illustrating a stage of image enhancement corresponding to decoding indicia and displaying decoded payloads present in an image captured by a machine vision camera, in accordance with embodiments of the present disclosure.
  • FIGS. 4A and 4B each depict an example application interface illustrating a stage of image enhancement corresponding to generating a secondary box that substantially encompasses indicia included in an image captured by a machine vision camera, in accordance with embodiments of the present disclosure.
  • FIGS. 5A-5C each depict an example application interface illustrating a stage of image enhancement corresponding to scaling and/or otherwise emphasizing a chosen indicia within an image captured by a machine vision camera, in accordance with embodiments of the present disclosure.
  • FIG. 6 is a flowchart representative of a method for enhancing image content captured by a machine vision camera, in accordance with embodiments described herein.
  • Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention.
  • The apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
  • DETAILED DESCRIPTION
  • Machine vision system owners/operators periodically have a need to visually evaluate images captured by the system's imaging equipment. In doing so, there arises a need to zoom in on certain elements (like barcodes) within the captured images for closer evaluation. This can be difficult to accomplish in an automatic manner, especially if multiple elements of the same kind are present on the screen. Approaches described herein address these difficulties and provide a solution which helps automate and simplify the zooming process.
  • FIG. 1 illustrates an example imaging system 100 configured to enhance image content captured by a machine vision camera, in accordance with various embodiments disclosed herein. In the example embodiment of FIG. 1, the imaging system 100 includes a user computing device 102 and an imaging device 104 communicatively coupled to the user computing device 102 via a network 106. Generally speaking, the user computing device 102 and the imaging device 104 may be capable of executing instructions to, for example, implement operations of the example methods described herein, as may be represented by the flowcharts of the drawings that accompany this description. The user computing device 102 is generally configured to enable a user/operator to create a machine vision job for execution on the imaging device 104. When created, the user/operator may then transmit/upload the machine vision job to the imaging device 104 via the network 106, where the machine vision job is then interpreted and executed. The user computing device 102 may comprise one or more operator workstations, and may include one or more processors 108, one or more memories 110, a networking interface 112, an input/output (I/O) interface 114, a smart imaging application 116, and an image enhancement application 128. It is to be understood, that a “machine vision job” as referenced herein may be or include any suitable imaging job including any suitable executable tasks, such as machine vision tasks, barcode decoding tasks, and/or any other tasks or combinations thereof.
  • The imaging device 104 is connected to the user computing device 102 via a network 106, and is configured to interpret and execute machine vision jobs received from the user computing device 102. Generally, the imaging device 104 may obtain a job file containing one or more job scripts from the user computing device 102 across the network 106 that may define the machine vision job and may configure the imaging device 104 to capture and/or analyze images in accordance with the machine vision job. For example, the imaging device 104 may include flash memory used for determining, storing, or otherwise processing imaging data/datasets and/or post-imaging data. The imaging device 104 may then receive, recognize, and/or otherwise interpret a trigger that causes the imaging device 104 to capture an image of the target object in accordance with the configuration established via the one or more job scripts. Once captured and/or analyzed, the imaging device 104 may transmit the images and any associated data across the network 106 to the user computing device 102 for further analysis and/or storage. In various embodiments, the imaging device 104 may be a “smart” camera and/or may otherwise be configured to automatically perform sufficient functionality of the imaging device 104 in order to obtain, interpret, and execute job scripts that define machine vision jobs, such as any one or more job scripts contained in one or more job files as obtained, for example, from the user computing device 102.
  • Broadly, the job file may be a JSON representation/data format of the one or more job scripts transferrable from the user computing device 102 to the imaging device 104. The job file may further be loadable/readable by a C++ runtime engine, or other suitable runtime engine, executing on the imaging device 104. Moreover, the imaging device 104 may run a server (not shown) configured to listen for and receive job files across the network 106 from the user computing device 102. Additionally, or alternatively, the server configured to listen for and receive job files may be implemented as one or more cloud-based servers, such as a cloud-based computing platform. For example, the server may be any one or more cloud-based platform(s) such as MICROSOFT AZURE, AMAZON AWS, or the like.
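  • For illustration only, transferring such a JSON job file might resemble the sketch below; the job-file schema, endpoint URL, and use of HTTP via the requests library are assumptions, since the disclosure specifies only that the job file is a JSON representation of job scripts and that the imaging device listens for job files over the network:

```python
import requests  # assumed transport; the disclosure only says the device listens for job files

# Hypothetical job-file structure; the actual schema is not specified in the disclosure.
job_file = {
    "jobName": "decode_all_indicia",
    "scripts": [
        {"tool": "barcode_decoder",
         "roi": {"x": 0, "y": 0, "w": 1920, "h": 1080},
         "settings": {"decodeAll": True}},
    ],
}

# Send the JSON job file to the imaging device over the network (URL is illustrative).
response = requests.post("http://imaging-device.local/jobs", json=job_file, timeout=5)
print(response.status_code)
```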
  • In any event, the imaging device 104 may include one or more processors 118, one or more memories 120, a networking interface 122, an I/O interface 124, and an imaging assembly 126. The imaging assembly 126 may include a digital camera and/or digital video camera for capturing or taking digital images and/or frames. Each digital image may comprise pixel data that may be analyzed by one or more tools each configured to perform an image analysis task. The digital camera and/or digital video camera of, e.g., the imaging assembly 126 may be configured, as disclosed herein, to take, capture, or otherwise generate digital images and, at least in some embodiments, may store such images in a memory (e.g., one or more memories 110, 120) of a respective device (e.g., user computing device 102, imaging device 104).
  • For example, the imaging assembly 126 may include a photo-realistic camera (not shown) for capturing, sensing, or scanning 2D image data. The photo-realistic camera may be an RGB (red, green, blue) based camera for capturing 2D images having RGB-based pixel data. In various embodiments, the imaging assembly may additionally include a three-dimensional (3D) camera (not shown) for capturing, sensing, or scanning 3D image data. The 3D camera may include an Infra-Red (IR) projector and a related IR camera for capturing, sensing, or scanning 3D image data/datasets. In some embodiments, the photo-realistic camera of the imaging assembly 126 may capture 2D images, and related 2D image data, at the same or similar point in time as the 3D camera of the imaging assembly 126 such that the imaging device 104 can have both sets of 3D image data and 2D image data available for a particular surface, object, area, or scene at the same or similar instance in time. In various embodiments, the imaging assembly 126 may include the 3D camera and the photo-realistic camera as a single imaging apparatus configured to capture 3D depth image data simultaneously with 2D image data. Consequently, the captured 2D images and the corresponding 2D image data may be depth-aligned with the 3D images and 3D image data.
  • In embodiments, imaging assembly 126 may be configured to capture images of surfaces or areas of a predefined search space or target objects within the predefined search space. For example, each tool included in a job script may additionally include a region of interest (ROI) corresponding to a specific region or a target object imaged by the imaging assembly 126. The composite area defined by the ROIs for all tools included in a particular job script may thereby define the predefined search space which the imaging assembly 126 may capture in order to facilitate the execution of the job script. However, the predefined search space may be user-specified to include a field of view (FOV) featuring more or less than the composite area defined by the ROIs of all tools included in the particular job script. It should be noted that the imaging assembly 126 may capture 2D and/or 3D image data/datasets of a variety of areas, such that additional areas in addition to the predefined search spaces are contemplated herein. Moreover, in various embodiments, the imaging assembly 126 may be configured to capture other sets of image data in addition to the 2D/3D image data, such as grayscale image data or amplitude image data, each of which may be depth-aligned with the 2D/3D image data.
  • The imaging device 104 may also process the 2D image data/datasets and/or 3D image datasets for use by other devices (e.g., the user computing device 102, an external server). For example, the one or more processors 118 may process the image data or datasets captured, scanned, or sensed by the imaging assembly 126. The processing of the image data may generate post-imaging data that may include metadata, simplified data, normalized data, result data, status data, or alert data as determined from the original scanned or sensed image data. The image data and/or the post-imaging data may be sent to the user computing device 102 executing the smart imaging application 116 for viewing, manipulation, and/or otherwise interaction. In other embodiments, the image data and/or the post-imaging data may be sent to a server for storage or for further manipulation. As described herein, the user computing device 102, imaging device 104, and/or external server or other centralized processing unit and/or storage may store such data, and may also send the image data and/or the post-imaging data to another application implemented on a user device, such as a mobile device, a tablet, a handheld device, or a desktop device.
  • Each of the one or more memories 110, 120 may include one or more forms of volatile and/or non-volatile, fixed and/or removable memory, such as read-only memory (ROM), electronic programmable read-only memory (EPROM), random access memory (RAM), erasable electronic programmable read-only memory (EEPROM), and/or other hard drives, flash memory, MicroSD cards, and others. In general, a computer program or computer based product, application, or code (e.g., smart imaging application 116, or other computing instructions described herein) may be stored on a computer usable storage medium, or tangible, non-transitory computer-readable medium (e.g., standard random access memory (RAM), an optical disc, a universal serial bus (USB) drive, or the like) having such computer-readable program code or computer instructions embodied therein, wherein the computer-readable program code or computer instructions may be installed on or otherwise adapted to be executed by the one or more processors 108, 118 (e.g., working in connection with the respective operating system in the one or more memories 110, 120) to facilitate, implement, or perform the machine readable instructions, methods, processes, elements or limitations, as illustrated, depicted, or described for the various flowcharts, illustrations, diagrams, figures, and/or other disclosure herein. In this regard, the program code may be implemented in any desired program language, and may be implemented as machine code, assembly code, byte code, interpretable source code or the like (e.g., via Golang, Python, C, C++, C#, Objective-C, Java, Scala, ActionScript, JavaScript, HTML, CSS, XML, etc.).
  • The one or more memories 110, 120 may store an operating system (OS) (e.g., Microsoft Windows, Linux, Unix, etc.) capable of facilitating the functionalities, apps, methods, or other software as discussed herein. The one or more memories 110 may also store the smart imaging application 116 and/or the image enhancement application 128, which may be configured to enable machine vision job construction/execution, as described further herein. Additionally, or alternatively, the smart imaging application 116 and/or the image enhancement application 128 may also be stored in the one or more memories 120 of the imaging device 104, and/or in an external database (not shown), which is accessible or otherwise communicatively coupled to the user computing device 102 via the network 106. The one or more memories 110, 120 may also store machine readable instructions, including any of one or more application(s), one or more software component(s), and/or one or more application programming interfaces (APIs), which may be implemented to facilitate or perform the features, functions, or other disclosure described herein, such as any methods, processes, elements or limitations, as illustrated, depicted, or described for the various flowcharts, illustrations, diagrams, figures, and/or other disclosure herein. For example, at least some of the applications, software components, or APIs may be, include, otherwise be part of, a machine vision based imaging application, such as the smart imaging application 116 and/or the image enhancement application 128, where each may be configured to facilitate their various functionalities discussed herein. It should be appreciated that one or more other applications may be envisioned and that are executed by the one or more processors 108, 118.
  • The one or more processors 108, 118 may be connected to the one or more memories 110, 120 via a computer bus responsible for transmitting electronic data, data packets, or otherwise electronic signals to and from the one or more processors 108, 118 and one or more memories 110, 120 in order to implement or perform the machine readable instructions, methods, processes, elements or limitations, as illustrated, depicted, or described for the various flowcharts, illustrations, diagrams, figures, and/or other disclosure herein.
  • The one or more processors 108, 118 may interface with the one or more memories 110, 120 via the computer bus to execute the operating system (OS). The one or more processors 108, 118 may also interface with the one or more memories 110, 120 via the computer bus to create, read, update, delete, or otherwise access or interact with the data stored in the one or more memories 110, 120 and/or external databases (e.g., a relational database, such as Oracle, DB2, MySQL, or a NoSQL based database, such as MongoDB). The data stored in the one or more memories 110, 120 and/or an external database may include all or part of any of the data or information described herein, including, for example, machine vision job images (e.g., images captured by the imaging device 104 in response to execution of a job script) and/or other suitable information.
  • The networking interfaces 112, 122 may be configured to communicate (e.g., send and receive) data via one or more external/network port(s) to one or more networks or local terminals, such as network 106, described herein. In some embodiments, networking interfaces 112, 122 may include a client-server platform technology such as ASP.NET, Java J2EE, Ruby on Rails, Node.js, a web service or online API, responsive for receiving and responding to electronic requests. The networking interfaces 112, 122 may implement the client-server platform technology that may interact, via the computer bus, with the one or more memories 110, 120 (including the applications(s), component(s), API(s), data, etc. stored therein) to implement or perform the machine readable instructions, methods, processes, elements or limitations, as illustrated, depicted, or described for the various flowcharts, illustrations, diagrams, figures, and/or other disclosure herein.
  • According to some embodiments, the networking interfaces 112, 122 may include, or interact with, one or more transceivers (e.g., WWAN, WLAN, and/or WPAN transceivers) functioning in accordance with IEEE standards, 3GPP standards, or other standards, and that may be used in receipt and transmission of data via external/network ports connected to network 106. In some embodiments, network 106 may comprise a private network or local area network (LAN). Additionally, or alternatively, network 106 may comprise a public network such as the Internet. In some embodiments, the network 106 may comprise routers, wireless switches, or other such wireless connection points communicating to the user computing device 102 (via the networking interface 112) and the imaging device 104 (via networking interface 122) via wireless communications based on any one or more of various wireless standards, including by non-limiting example, IEEE 802.11a/b/c/g (WIFI), the BLUETOOTH standard, or the like.
  • The I/O interfaces 114, 124 may include or implement operator interfaces configured to present information to an administrator or operator and/or receive inputs from the administrator or operator. An operator interface may provide a display screen (e.g., via the user computing device 102 and/or imaging device 104) which a user/operator may use to visualize any images, graphics, text, data, features, pixels, and/or other suitable visualizations or information. For example, the user computing device 102 and/or imaging device 104 may comprise, implement, have access to, render, or otherwise expose, at least in part, a graphical user interface (GUI) for displaying images, graphics, text, data, features, pixels, and/or other suitable visualizations or information on the display screen. The I/O interfaces 114, 124 may also include I/O components (e.g., ports, capacitive or resistive touch sensitive input panels, keys, buttons, lights, LEDs, any number of keyboards, mice, USB drives, optical drives, screens, touchscreens, etc.), which may be directly/indirectly accessible via or attached to the user computing device 102 and/or the imaging device 104. According to some embodiments, an administrator or user/operator may access the user computing device 102 and/or imaging device 104 to construct jobs, review images or other information, make changes, input responses and/or selections, and/or perform other functions.
  • As described above herein, in some embodiments, the user computing device 102 may perform the functionalities as discussed herein as part of a “cloud” network or may otherwise communicate with other hardware or software components within the cloud to send, retrieve, or otherwise analyze data or information described herein.
  • FIG. 2A is a perspective view of the imaging device 104 of FIG. 1, in accordance with embodiments described herein. The imaging device 104 includes a housing 202, an imaging aperture 204, a user interface label 206, a dome switch/button 208, one or more light emitting diodes (LEDs) 210, and mounting point(s) 212. As previously mentioned, the imaging device 104 may obtain job files from a user computing device (e.g., user computing device 102) which the imaging device 104 thereafter interprets and executes. The instructions included in the job file may include device configuration settings (also referenced herein as “imaging settings”) operable to adjust the configuration of the imaging device 104 prior to capturing images of a target object.
  • For example, the device configuration settings may include instructions to adjust one or more settings related to the imaging aperture 204. As an example, assume that at least a portion of the intended analysis corresponding to a machine vision job requires the imaging device 104 to maximize the brightness of any captured image. To accommodate this requirement, the job file may include device configuration settings to increase the aperture size of the imaging aperture 204. The imaging device 104 may interpret these instructions (e.g., via one or more processors 118) and accordingly increase the aperture size of the imaging aperture 204. Thus, the imaging device 104 may be configured to automatically adjust its own configuration to optimally conform to a particular machine vision job. Additionally, the imaging device 104 may include or otherwise be adaptable to include, for example but without limitation, one or more bandpass filters, one or more polarizers, one or more DPM diffusers, one or more C-mount lenses, and/or one or more C-mount liquid lenses over or otherwise influencing the received illumination through the imaging aperture 204.
  • The user interface label 206 may include the dome switch/button 208 and one or more LEDs 210, and may thereby enable a variety of interactive and/or indicative features. Generally, the user interface label 206 may enable a user to trigger and/or tune the imaging device 104 (e.g., via the dome switch/button 208) and to recognize when one or more functions, errors, and/or other actions have been performed or taken place with respect to the imaging device 104 (e.g., via the one or more LEDs 210). For example, the trigger function of a dome switch/button (e.g., dome/switch button 208) may enable a user to capture an image using the imaging device 104 and/or to display a trigger configuration screen of a user application (e.g., smart imaging application 116, image enhancement application 128). The trigger configuration screen may allow the user to configure one or more triggers for the imaging device 104 that may be stored in memory (e.g., one or more memories 110, 120) for use in later developed machine vision jobs, as discussed herein.
  • As another example, the tuning function of a dome switch/button (e.g., dome/switch button 208) may enable a user to automatically and/or manually adjust the configuration of the imaging device 104 in accordance with a preferred/predetermined configuration and/or to display an imaging configuration screen of a user application (e.g., smart imaging application 116, image enhancement application 128). The imaging configuration screen may allow the user to configure one or more configurations of the imaging device 104 (e.g., aperture size, exposure length, etc.) that may be stored in memory (e.g., one or more memories 110, 120) for use in later developed machine vision jobs, as discussed herein.
  • To further this example, and as discussed further herein, a user may utilize the imaging configuration screen (or more generally, the smart imaging application 116 and/or the image enhancement application 128) to establish two or more configurations of imaging settings for the imaging device 104. The user may then save these two or more configurations of imaging settings as part of a machine vision job that is then transmitted to the imaging device 104 in a job file containing one or more job scripts. The one or more job scripts may then instruct the imaging device 104 processors (e.g., one or more processors 118) to automatically and sequentially adjust the imaging settings of the imaging device in accordance with one or more of the two or more configurations of imaging settings after each successive image capture.
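  • The sequential adjustment of imaging settings after each capture could be pictured as cycling through the stored configurations, as in this hypothetical sketch (the configuration fields and the callback are illustrative, not from the disclosure):

```python
from itertools import cycle

# Hypothetical imaging-setting configurations saved with the machine vision job.
configs = cycle([
    {"aperture": "f/2.8", "exposure_ms": 8},
    {"aperture": "f/5.6", "exposure_ms": 20},
])

def on_image_captured(apply_settings):
    """After each capture, advance to the next stored configuration (illustrative only)."""
    apply_settings(next(configs))

on_image_captured(print)  # would apply {'aperture': 'f/2.8', 'exposure_ms': 8}
on_image_captured(print)  # then {'aperture': 'f/5.6', 'exposure_ms': 20}
```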
  • The mounting point(s) 212 may enable a user to connect and/or removably affix the imaging device 104 to a mounting device (e.g., imaging tripod, camera mount, etc.), a structural surface (e.g., a warehouse wall, a warehouse ceiling, structural support beam, etc.), other accessory items, and/or any other suitable connecting devices, structures, or surfaces. For example, the imaging device 104 may be optimally placed on a mounting device in a distribution center, manufacturing plant, warehouse, and/or other facility to image and thereby monitor the quality/consistency of products, packages, and/or other items as they pass through the imaging device's 104 FOV. Moreover, the mounting point(s) 212 may enable a user to connect the imaging device 104 to a myriad of accessory items including, but without limitation, one or more external illumination devices, one or more mounting devices/brackets, and the like.
  • In addition, the imaging device 104 may include several hardware components contained within the housing 202 that enable connectivity to a computer network (e.g., network 106). For example, the imaging device 104 may include a networking interface (e.g., networking interface 122) that enables the imaging device 104 to connect to a network, such as a Gigabit Ethernet connection and/or a Dual Gigabit Ethernet connection. Further, the imaging device 104 may include transceivers and/or other communication components as part of the networking interface to communicate with other devices (e.g., the user computing device 102) via, for example, Ethernet/IP, PROFINET, Modbus TCP, CC-Link, USB 3.0, RS-232, and/or any other suitable communication protocol or combinations thereof.
  • FIG. 2B is a block diagram representative of an example logic circuit capable of implementing, for example, one or more components of the example imaging device 104 of FIG. 2A. The example logic circuit of FIG. 2B is a processing platform 230 capable of executing instructions to, for example, implement operations of the example methods described herein, as may be represented by the flowcharts of the drawings that accompany this description. Other example logic circuits capable of, for example, implementing operations of the example methods described herein include field programmable gate arrays (FPGAs) and application specific integrated circuits (ASICs).
  • The example processing platform 230 of FIG. 2B includes a processor 232 such as, for example, one or more microprocessors, controllers, and/or any suitable type of processor. The example processing platform 230 of FIG. 2B includes memory (e.g., volatile memory, non-volatile memory) 234 accessible by the processor 232 (e.g., via a memory controller). The example processor 232 interacts with the memory 234 to obtain, for example, machine-readable instructions stored in the memory 234 corresponding to, for example, the operations represented by the flowcharts of this disclosure. The memory 234 also includes the smart imaging application 116 and, optionally, the image enhancement application 128 that are each accessible by the example processor 232. The smart imaging application 116 and/or the image enhancement application 128 may comprise rule-based instructions, an artificial intelligence (AI) and/or machine learning-based model, and/or any other suitable algorithm architecture or combination thereof configured to, for example, enhance image content captured by a machine vision camera (e.g., imaging device 104). To illustrate, the example processor 232 may access the memory 234 to execute the smart imaging application 116 and/or the image enhancement application 128 when the imaging device 104 (via the imaging assembly 126) captures an image that includes a plurality of indicia that each encode a payload. Additionally, or alternatively, machine-readable instructions corresponding to the example operations described herein may be stored on one or more removable media (e.g., a compact disc, a digital versatile disc, removable flash memory, etc.) that may be coupled to the processing platform 230 to provide access to the machine-readable instructions stored thereon.
  • The example processing platform 230 of FIG. 2B also includes a networking interface 236 to enable communication with other machines via, for example, one or more networks. The example networking interface 236 includes any suitable type of communication interface(s) (e.g., wired and/or wireless interfaces) configured to operate in accordance with any suitable protocol(s) (e.g., Ethernet for wired communications and/or IEEE 802.11 for wireless communications).
  • The example processing platform 230 of FIG. 2B also includes input/output (I/O) interfaces 238 to enable receipt of user input and communication of output data to the user. Such user input and communication may include, for example, any number of keyboards, mice, USB drives, optical drives, screens, touchscreens, etc.
  • FIG. 3A depicts an example application interface 300 which includes an example image 314 captured by an imaging device 104. The example application interface 300 may be displayed as part of a smart imaging application (e.g., smart imaging application 116), an image enhancement application (e.g., image enhancement application 128), and/or any other suitable application or combinations thereof. For example, the example application interface 300 may be rendered on an interface of a user computing device (e.g., user computing device 102) as a result of the imaging device executing a machine vision job, and may be formatted in accordance with instructions included as part of the smart imaging application 116. The particular image renderings and/or other aspects of the example application interface 300 may be determined and displayed in accordance with instructions included as part of the image enhancement application 128, as described herein.
  • In some instances, each image displayed in the example application interface 300 will include a barcode, such as barcode 302 and barcode 304. Depending on how a machine vision job is configured, the application (e.g., image enhancement application 128) may receive, from the imaging device, decoded barcode data associated with either barcode 302/304 (this may also be referred to as “barcode result data”). This information can be displayed in an appropriate location within the interface. In the depicted interface 300, the data decoded from barcode 302 is displayed as a string 306 and data decoded from barcode 304 is displayed as a string 308 within the entry window (noted as “View Results” in FIGS. 3A and 3B) 310.
  • As illustrated in FIG. 3A, the example application interface 300 additionally includes a settings portion 316 and a filmstrip portion 318. The settings portion 316 may generally allow a user to configure particular actions performed as part of the machine vision job, barcode decoding job, and/or any other suitable executable job that is executed by the imaging device. For example, the user may enable the “Decode All” option illustrated in FIG. 3A to configure the indicia decoder included as part of the machine vision job to decode any decodable indicia that is identifiable within the example image 314. As a result, the indicia decoder may decode each of the payloads from barcodes 302, 304, and display the results of the decoding within the entry window 310. The filmstrip portion 318 may include all captured images by the imaging device during an individual execution of the machine vision job. For example, as a target object passes by the imaging device, the imaging device may capture one or more images of the target object, and each of those captured images may be displayed within the filmstrip portion 318.
  • FIG. 3B depicts another example application interface 330 which includes another example image 320 captured by an imaging device 104. This example image 320 includes multiple indicia (e.g., quick response (QR) codes), and several of these indicia 322, 324 may be decoded by the indicia decoder included as part of the machine vision job executed by the imaging device. The data decoded from indicia 322 is displayed as a string 326, and the data decoded from indicia 324 is displayed as a string 328 within the entry window 310. Thus, the indicia decoding performed as part of machine vision jobs described in the present disclosure may be configured to decode a payload from any suitable indicia, such as barcodes, QR codes, data matrices, etc.
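  • As a stand-in for the machine vision job's indicia decoder, the open-source pyzbar library can illustrate how payloads and locations might be gathered into the entries shown in the entry window 310; the file name and helper function below are hypothetical:

```python
from PIL import Image
from pyzbar.pyzbar import decode  # open-source decoder standing in for the job's decoder

def list_entries(image_path: str):
    """Decode every indicia in the image and return (payload, bounding rect) entries."""
    results = decode(Image.open(image_path))
    return [(r.data.decode("utf-8", errors="replace"), r.rect) for r in results]

# Each entry pairs a decoded payload string (as shown in the entry window) with its location.
for payload, rect in list_entries("captured_frame.png"):  # hypothetical capture file
    print(payload, rect)
```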
  • According to some aspects of the present disclosure, the application (e.g., image enhancement application 128) provides a means for a user to select a desired indicia and have the application automatically center the image about that indicia and zoom in on it to a predetermined zoom level. Achieving this functionality can be particularly difficult because indicia come in a wide variety of shapes and sizes (e.g., barcodes 302, 304 and indicia 322, 324) and because, depending on the operating environment, the dimensions of the indicia as they appear in the captured image may vary greatly.
  • In some implementations, the application may achieve this as follows. To identify which indicia the user wishes to zoom in on, the user may select the particular indicia by selecting a particular entry from the entry window 310. This can be done by hovering the pointer of a mouse over the line containing the payload associated with the indicia of interest and then making the selection by clicking a mouse button. Similar functionality may, for example, be achieved via a keyboard or any other input device that allows a specific entry associated with a desired indicia to be selected in the entry window 310. In other instances, the user may make a selection by hovering the mouse (or any other input device) over the desired indicia in the display region 312 and then executing the selection by clicking a mouse button. It should be appreciated that throughout this disclosure, references to input devices like a mouse should not be seen as limiting, and other input devices should be considered to be within the scope of this disclosure. For example, when the application is executed on a mobile device such as a tablet or a notebook with touch-screen capabilities, a user's finger and the corresponding touch inputs on the screen may function just like the input functions of a computer mouse.
  • Prior or subsequent to the selection of a desired indicia, the application determines, for at least some of the indicia in the image, the bounds of a bounding box that substantially encompasses the respective indicia. In some implementations where the determination is made prior to the selection of the desired indicia, the application may be configured to display at least one bounding box around each corresponding indicia visible in the image shown in the display region 312.
  • Generally, each bounding box may be comprised of a series of pixel points which correspond to the outer edges of each respective indicia. Consequently, each point of the bounding box will have an x,y pixel coordinate within the coordinate system of the image. From this, and as discussed herein, for each desired bounding box, the application (e.g., image enhancement application 128) can determine the highest pixel coordinate value in the vertical direction (y-axis) (also referred to as the upper pixel coordinate limit), the lowest pixel coordinate value in the vertical direction (y-axis) (also referred to as the lower pixel coordinate limit), the farthest side (e.g., left side) pixel coordinate value in the horizontal direction (x-axis) (also referred to as the first side pixel coordinate limit), and the farthest other side (e.g., right side) pixel coordinate value in the horizontal direction (x-axis) (also referred to as the second side pixel coordinate limit). Having this data allows the application to derive a secondary box for any desired indicia, where the secondary box is derived not from the shape or the orientation of the indicia, but from its coordinate limits. This provides the benefit of being able to orient the secondary box in a manner that is consistent with the display region, which in most implementations results in the secondary box being approximately square or rectangular in shape, with the top and bottom sides each lying at a single vertical (height) coordinate and the two vertical sides each lying at a single horizontal (width) coordinate.
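  • As a rough illustration of the coordinate-limit derivation described above, the following Python sketch computes the four pixel coordinate limits and the resulting axis-aligned secondary box from a bounding box's pixel points. The names (e.g., SecondaryBox, secondary_box_from_bounding_box) are assumptions made for illustration and are not taken from the disclosed implementation.

```python
# Illustrative sketch only: deriving an axis-aligned "secondary box" from the
# corner points of an indicia's bounding box. Names are assumed, not from the patent.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SecondaryBox:
    upper: int        # upper pixel coordinate limit (smallest y in image coordinates)
    lower: int        # lower pixel coordinate limit (largest y)
    first_side: int   # first side pixel coordinate limit (e.g., leftmost x)
    second_side: int  # second side pixel coordinate limit (e.g., rightmost x)

    @property
    def center(self) -> Tuple[float, float]:
        return ((self.first_side + self.second_side) / 2,
                (self.upper + self.lower) / 2)

def secondary_box_from_bounding_box(points: List[Tuple[int, int]]) -> SecondaryBox:
    """Derive the coordinate limits from the (x, y) pixel points of a bounding box.

    The resulting box is aligned with the display region regardless of the
    indicia's shape or orientation.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return SecondaryBox(upper=min(ys), lower=max(ys),
                        first_side=min(xs), second_side=max(xs))

# Example: a rotated barcode whose bounding box corners are not axis-aligned.
box = secondary_box_from_bounding_box([(120, 80), (260, 60), (280, 150), (140, 170)])
print(box)         # SecondaryBox(upper=60, lower=170, first_side=120, second_side=280)
print(box.center)  # (200.0, 115.0)
```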
  • An example of such a secondary box is illustrated in both of FIGS. 4A and 4B as 412 and 422, respectively. FIG. 4A depicts an example application interface 410 that includes the secondary box 412. As illustrated in FIG. 4A, the displayed image may include multiple indicia, and more than one (e.g., the barcode contained within the secondary box 412 and the barcode below the secondary box 412) may be decodable. In this case, the image enhancement application may automatically recognize and decode each indicia included within the image, and may display the payloads resulting from the decoding. In certain aspects, the image enhancement application may determine the secondary box 412 in response to a user selection of the indicia included within the secondary box 412. Additionally, or alternatively, the image enhancement application may automatically determine the secondary box 412 and a secondary box substantially encompassing the other indicia in response to decoding each indicia. Of course, in either case, the image enhancement application may determine the secondary box(es) based on the coordinate limits corresponding to the decoded indicia.
  • Alternatively, in certain aspects, the displayed image may include multiple indicia, but only one (e.g., the barcode contained within the secondary box 412) may be decodable. In this case, the image enhancement application may automatically recognize and decode the indicia illustrated within the secondary box 412. Thereafter, the image enhancement application may determine the secondary box 412 based on the coordinate limits corresponding to the decoded indicia.
  • As yet another example, FIG. 4B depicts an example application interface 420 that includes the secondary box 422. As illustrated in FIG. 4B, the displayed image may include portions of multiple indicia, but only one (e.g., the QR code contained within the secondary box 422) may be fully decodable. In this case, the image enhancement application may automatically recognize and decode the indicia illustrated within the secondary box 422. Thereafter, the image enhancement application may determine the secondary box 422 based on the coordinate limits corresponding to the decoded indicia.
  • In any event, once the secondary box has been determined, the application (e.g., image enhancement application 128) may further be configured to reposition the image in the display region such that the center point of the secondary box associated with the indicia of interest (e.g., the indicia that was previously selected for viewing/analysis) is positioned within some threshold distance from the center point of the display region. In some aspects, the threshold distance may be zero and the center point of the secondary box can overlay the center point of the display region. In other implementations, a non-zero distance threshold (that could be expressed, for example, in terms of pixels) may be implemented. Ultimately, the outcome of such positioning is that the secondary box (and thus the indicia associated with it) will be centered at or visually near the center of the display region.
  • For example, and as illustrated in FIG. 5A, the image enhancement application may display the example application interface 500 in response to receiving a user's selection of an indicia and repositioning the image within the display region such that the indicia is substantially centered within the display region. The image enhancement application may receive an indication that a user has selected the indicia 502 within the image, and may proceed to determine the secondary box 504 based on the various coordinate limits (e.g., upper pixel coordinate limit, lower pixel coordinate limit, first side pixel coordinate limit, second side pixel coordinate limit) corresponding to the indicia 502.
  • Generally, as previously mentioned, the image enhancement application may reposition the image containing the indicia 502 such that the distance 506c between the two center-points 506a, 506b is less than a threshold distance. When the image enhancement application repositions the image, the application may also compare the center-point 506a of the secondary box to the center-point 506b of the display region to determine whether to reposition the image further in order to reduce the distance 506c between the two center-points 506a, 506b. In this manner, the image enhancement application may position the indicia 502 chosen by the user as close to the center-point of the display region as possible (e.g., until the distance 506c between the two center-points 506a, 506b is eliminated).
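  • A minimal sketch of this centering comparison, assuming the repositioning is expressed as a simple pixel translation of the image and using illustrative names only, might look as follows:

```python
# Sketch of the repositioning step under assumed conventions: the image is
# translated by (dx, dy) pixels so the secondary box's center approaches the
# display region's center. The threshold value here is arbitrary.
def center_offset(box_center, display_center, threshold_px=0):
    """Return the (dx, dy) translation that brings the secondary box's
    center-point to within `threshold_px` of the display region's center-point."""
    dx = display_center[0] - box_center[0]
    dy = display_center[1] - box_center[1]
    distance = (dx ** 2 + dy ** 2) ** 0.5
    if distance <= threshold_px:
        return (0, 0)   # already close enough; no repositioning needed
    return (dx, dy)     # translate the displayed image by this amount

# e.g. a secondary box centered at (200, 115) in a 1000x1000-pixel display region:
print(center_offset((200, 115), (500, 500)))   # (300, 385)
```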
  • Further, when the image enhancement application repositions the image, the application may also scale the viewpoint (up or down) such that the secondary box 504 occupies some predetermined amount of the display window. For instance, the scaling performed by the image enhancement application may be expressed as a ratio of the pixels occupied by the secondary box 504 in a vertical direction to the vertical pixel count of the display region. Thus, if the desired ratio is 1:2, and the secondary box 504 has a height of 200 pixels while the display region has a height of 1000 pixels, the viewpoint will be scaled 2.5 times such that the secondary box 504 occupies 500 pixels. The image enhancement application may apply the same or a similar approach along the horizontal axis.
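  • By way of illustration only, the vertical-ratio calculation described above might be sketched as follows; the function name and argument names are assumptions rather than the disclosed implementation.

```python
# Sketch of the vertical-ratio scaling described above (illustrative names only).
def scale_for_vertical_ratio(box_height_px, display_height_px, desired_ratio=0.5):
    """Scale factor at which the secondary box occupies `desired_ratio` of the
    display region's vertical pixel count."""
    return (display_height_px * desired_ratio) / box_height_px

# The example from the text: a 200-pixel-tall secondary box in a 1000-pixel-tall
# display region, with a desired ratio of 1:2.
print(scale_for_vertical_ratio(200, 1000))   # 2.5
```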
  • Of course, it should be appreciated that unless the aspect ratio of the secondary box (e.g., secondary box 504) matches the aspect ratio of the display region, the horizontal secondary-box-to-display-region ratio may not equal the vertical secondary-box-to-display-region ratio. Thus, when the image enhancement application scales the image in certain aspects, the application will scale the viewpoint such that both the horizontal and the vertical secondary-box-to-display-region ratios are at or below the desired threshold. This can help prevent instances where a relatively narrow but tall secondary box is scaled based on the horizontal ratio without regard for the vertical ratio.
  • For instance, with a display region of 1000×2000 pixels and a secondary box of 400×100 pixels (expressed here as height by width), scaling the image so that the horizontal secondary-box-to-display-region ratio is 1:2 requires scaling the image by a factor of 10. However, such an increase would cause the secondary box (and thus the indicia that is displayed therein) to grow to a height of 4000 pixels, which is beyond the display capabilities of the display region. Thus, in this instance the image enhancement application may limit the scaling factor to 1.25, causing the secondary box to increase to dimensions of 500×125 pixels and meeting the 1:2 threshold requirement along the vertical axis. Alternatively, in certain aspects, a user may configure the image enhancement application to scale the image such that a portion of the selected indicia is not featured within the display region after the image is scaled.
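  • A hedged sketch of this two-axis constraint, assuming dimensions are given as (height, width) pixel counts consistent with the numeric example above, could take the following form; it simply takes the smaller of the two per-axis scale limits.

```python
# Sketch of scaling constrained along both axes, with assumed (height, width)
# ordering. Names and structure are illustrative, not the disclosed implementation.
def constrained_scale(box_hw, display_hw, desired_ratio=0.5):
    """Largest scale factor at which neither the vertical nor the horizontal
    secondary-box-to-display-region ratio exceeds `desired_ratio`."""
    box_h, box_w = box_hw
    disp_h, disp_w = display_hw
    return min((disp_h * desired_ratio) / box_h,
               (disp_w * desired_ratio) / box_w)

# The example from the text: a 400x100 secondary box in a 1000x2000 display region.
scale = constrained_scale((400, 100), (1000, 2000))
print(scale)                      # 1.25
print(400 * scale, 100 * scale)   # 500.0 125.0 -> the 1:2 ratio is met vertically
```

Taking the minimum of the per-axis limits is what keeps the indicia fully displayable; a user-configurable variant could relax this so part of the indicia falls outside the display region, as noted above.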
  • For example, as illustrated in FIG. 5A, the image enhancement application may determine horizontal distances 508 between the vertical sides of the secondary box 504 and the vertical edges of the display region and vertical distances 509 between the horizontal sides of the secondary box 504 and the horizontal edges of the display region. Assume that the desired ratio of the indicia 502 relative to the display region is 1:2. In this example, the image enhancement application may scale the image such that the total length of the horizontal distances 508 is equal to the length of one horizontal side of the secondary box 504 and the total length of the vertical distances 509 is equal to the length of one vertical side of the secondary box 504. Of course, it is to be understood that the image enhancement application may scale the image to any suitable ratio, as previously described.
  • In some instances, as illustrated for example in FIGS. 5B and 5C, the region outside of the selected indicia may be masked, before or after scaling, to highlight the selected indicia. This mask may be applied outside of the secondary box or outside the bounding box of the selected indicia. The mask may be transparent, opaque, color-changing, sharpness-decreasing, or of any other nature that brings the indicia of interest to the forefront of the user's attention.
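  • Purely as an illustration of one possible darkening mask, the following NumPy-based sketch dims everything outside the secondary box; the dimming factor, argument order, and function name are all assumptions, and a transparent or sharpness-decreasing mask would be implemented differently.

```python
# Sketch of an opaque-style dimming mask applied outside the secondary box.
import numpy as np

def mask_outside_box(image: np.ndarray, box, dim: float = 0.25) -> np.ndarray:
    """Darken every pixel outside the secondary box so the selected indicia
    stands out. `box` is (upper, lower, left, right) in pixel coordinates."""
    upper, lower, left, right = box
    out = (image.astype(np.float32) * dim).astype(image.dtype)      # dimmed copy
    out[upper:lower, left:right] = image[upper:lower, left:right]   # restore ROI
    return out

# e.g. an 8-bit grayscale frame with an indicia bounded by rows 60-170, cols 120-280:
frame = np.full((480, 640), 200, dtype=np.uint8)
masked = mask_outside_box(frame, (60, 170, 120, 280))
```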
  • For example, as illustrated in FIG. 5B, the image enhancement application may display an example application interface 510 that features a selected indicia 512 and a masked background portion 514. The user may configure the image enhancement application to automatically apply the mask to the background portion 514 upon completion of the image scaling, and/or at any other point after the user selects the selected indicia 512. Additionally, or alternatively, the image enhancement application may provide a variety of graphical indications to allow a user to understand which indicia is currently displayed and where that indicia is located within the displayed image.
  • As an example, FIG. 5C depicts an example application interface 520 that includes the selected indicia 512, the masked background portion 514, and a miniature image 522 that includes a scaled image indicator 524. The miniature image 522 may generally represent the original image from which the displayed image was generated (e.g., via repositioning, scaling, masking), and the scaled image indicator 524 may provide a graphical indication of the displayed image as part of the original image for a user's reference. The image enhancement application may automatically, upon selection of the indicia, scale the image to generate the displayed image and display the miniature image 522 and the scaled image indicator 524 so that the user does not lose track of where the displayed image and selected indicia 512 are relative to the original image. Further, in certain aspects, the image enhancement application may render an animation, upon the user's selection of the selected indicia 512, within the display region that minimizes the original image into the miniature image 522 and thereafter generate the scaled image indicator 524 as an overlay over the miniature image 522.
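  • One possible way to compute the scaled image indicator's rectangle on the miniature image, sketched here with assumed names and an assumed representation of the visible window, is shown below.

```python
# Sketch, under assumed conventions, of computing the "scaled image indicator":
# the rectangle of the original image currently visible in the display region,
# mapped into the miniature (thumbnail) image's coordinates.
def indicator_rect(offset_xy, scale, display_wh, original_wh, mini_wh):
    """Return (x, y, w, h) of the visible portion, in miniature-image pixels.

    offset_xy: top-left of the visible window in original-image coordinates.
    scale:     current zoom factor of the displayed image.
    """
    ox, oy = offset_xy
    disp_w, disp_h = display_wh
    orig_w, orig_h = original_wh
    mini_w, mini_h = mini_wh
    # Size of the visible window in original-image pixels.
    vis_w, vis_h = disp_w / scale, disp_h / scale
    # Map the window into thumbnail coordinates.
    sx, sy = mini_w / orig_w, mini_h / orig_h
    return (ox * sx, oy * sy, vis_w * sx, vis_h * sy)

# e.g. zoomed 2.5x into a 1920x1080 frame, shown in a 960x540 viewport,
# with a 192x108 thumbnail:
print(indicator_rect((600, 300), 2.5, (960, 540), (1920, 1080), (192, 108)))
# approximately (60.0, 30.0, 38.4, 21.6)
```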
  • FIG. 6 is a flowchart representative of a method 600 for enhancing image content captured by a machine vision camera, in accordance with embodiments described herein. The method 600 includes receiving an image captured by the machine vision camera (block 602). The image may be received at an application executing on a user computing device communicatively coupled to a machine vision camera, and the image may include a plurality of indicia (e.g., barcodes, QR codes, etc.). Moreover, each of the plurality of indicia may encode a payload.
  • The method 600 may also include identifying, in the received image, each of the indicia (block 604). Accordingly, for each respective indicia in the image, the method 600 may include determining bounds of a respective bounding box that substantially encompass each respective indicia in the image (block 606). For example, the image enhancement application may determine the bounds of a respective bounding box corresponding to a single indicia by identifying the extreme coordinates of the indicia that define the outermost boundary of the indicia within the image. Thus, in certain aspects, the bounding box may be any suitable shape in order to substantially encompass the indicia.
  • The method 600 may also include displaying a plurality of entries, wherein each of the plurality of entries corresponds to a respective indicia of the plurality of indicia (block 608). In certain aspects, each of the plurality of entries includes a payload of the corresponding indicia. A user may interact with the interface in order to indicate a selection of one or more of the displayed entries and/or indicia. As such, the method 600 may include receiving, at the interface, a selection of one of the entries resulting in a selected entry that has a corresponding indicia (block 610).
  • In certain aspects, the application (e.g., image enhancement application) may mask a region of the display region upon selection of an entry/indicia by a user. Thus, the method 600 may include masking a masked region in the display region responsive to receiving the selection. In these aspects, the masked region may be a region outside of at least one of (i) the secondary box or (ii) the respective bounding box of the corresponding indicia.
  • The method 600 may also include determining an upper pixel coordinate limit, a lower pixel coordinate limit, a first side pixel coordinate limit, and a second side coordinate limit (block 612). The image enhancement application may determine these coordinate limits based on the respective bounding box of the corresponding indicia. Based on these coordinate limits, the image enhancement application may determine a secondary box having an upper bound, a lower bound, a first side bound, and a second side bound (block 614).
  • The method 600 may also include displaying the image in a display region of the interface such that a center-point of the secondary box is positioned within a predetermined distance threshold from a center-point of the display region (block 616). In certain aspects, for each respective indicia in the image, the image enhancement application may display the respective bounding box on an interface of the application. In some aspects, the image enhancement application may display the respective bounding box corresponding to each of the plurality of indicia appearing within a visible portion of the image displayed in the display region.
  • The method 600 may also include scaling the image such that at least one of (i) a first vertical pixel count between the upper bound and the lower bound is within a first predetermined ratio threshold of a second vertical pixel count of the display region, or (ii) a first horizontal pixel count between the first side bound and the second side bound is within a second predetermined ratio threshold of a second horizontal pixel count of the display region (block 618). In certain aspects, at least one of the first predetermined ratio threshold or the second predetermined ratio threshold is inclusively between 1:2 and 2:3. In some aspects, the image enhancement application may scale the image such that (i) the first vertical pixel count does not exceed the first predetermined ratio threshold of the second vertical pixel count, and (ii) the first horizontal pixel count does not exceed the second predetermined ratio threshold of the second horizontal pixel count.
  • Moreover, in certain aspects, at least one of the first predetermined ratio threshold and the second predetermined ratio threshold is user-definable such that a portion of the corresponding indicia is excluded from the scaled image. Additionally, or alternatively, the image enhancement application may display a miniature version of the image in the display region as an overlay covering a portion of the scaled image. In these aspects, the miniature version of the image includes an indicated portion representing the scaled image.
  • Additional Considerations
  • The above description refers to a block diagram of the accompanying drawings. Alternative implementations of the example represented by the block diagram include one or more additional or alternative elements, processes and/or devices. Additionally, or alternatively, one or more of the example blocks of the diagram may be combined, divided, re-arranged or omitted. Components represented by the blocks of the diagram are implemented by hardware, software, firmware, and/or any combination of hardware, software and/or firmware. In some examples, at least one of the components represented by the blocks is implemented by a logic circuit. As used herein, the term “logic circuit” is expressly defined as a physical device including at least one hardware component configured (e.g., via operation in accordance with a predetermined configuration and/or via execution of stored machine-readable instructions) to control one or more machines and/or perform operations of one or more machines. Examples of a logic circuit include one or more processors, one or more coprocessors, one or more microprocessors, one or more controllers, one or more digital signal processors (DSPs), one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), one or more microcontroller units (MCUs), one or more hardware accelerators, one or more special-purpose computer chips, and one or more system-on-a-chip (SoC) devices. Some example logic circuits, such as ASICs or FPGAs, are specifically configured hardware for performing operations (e.g., one or more of the operations described herein and represented by the flowcharts of this disclosure, if such are present). Some example logic circuits are hardware that executes machine-readable instructions to perform operations (e.g., one or more of the operations described herein and represented by the flowcharts of this disclosure, if such are present). Some example logic circuits include a combination of specifically configured hardware and hardware that executes machine-readable instructions. The above description refers to various operations described herein and flowcharts that may be appended hereto to illustrate the flow of those operations. Any such flowcharts are representative of example methods disclosed herein. In some examples, the methods represented by the flowcharts implement the apparatus represented by the block diagrams. Alternative implementations of example methods disclosed herein may include additional or alternative operations. Further, operations of alternative implementations of the methods disclosed herein may be combined, divided, re-arranged or omitted. In some examples, the operations described herein are implemented by machine-readable instructions (e.g., software and/or firmware) stored on a medium (e.g., a tangible machine-readable medium) for execution by one or more logic circuits (e.g., processor(s)). In some examples, the operations described herein are implemented by one or more configurations of one or more specifically designed logic circuits (e.g., ASIC(s)). In some examples, the operations described herein are implemented by a combination of specifically designed logic circuit(s) and machine-readable instructions stored on a medium (e.g., a tangible machine-readable medium) for execution by logic circuit(s).
  • As used herein, each of the terms “tangible machine-readable medium,” “non-transitory machine-readable medium” and “machine-readable storage device” is expressly defined as a storage medium (e.g., a platter of a hard disk drive, a digital versatile disc, a compact disc, flash memory, read-only memory, random-access memory, etc.) on which machine-readable instructions (e.g., program code in the form of, for example, software and/or firmware) are stored for any suitable duration of time (e.g., permanently, for an extended period of time (e.g., while a program associated with the machine-readable instructions is executing), and/or a short period of time (e.g., while the machine-readable instructions are cached and/or during a buffering process)). Further, as used herein, each of the terms “tangible machine-readable medium,” “non-transitory machine-readable medium” and “machine-readable storage device” is expressly defined to exclude propagating signals. That is, as used in any claim of this patent, none of the terms “tangible machine-readable medium,” “non-transitory machine-readable medium,” and “machine-readable storage device” can be read to be implemented by a propagating signal.
  • In the foregoing specification, specific embodiments have been described. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of present teachings. Additionally, the described embodiments/examples/implementations should not be interpreted as mutually exclusive, and should instead be understood as potentially combinable if such combinations are permissive in any way. In other words, any feature disclosed in any of the aforementioned embodiments/examples/implementations may be included in any of the other aforementioned embodiments/examples/implementations.
  • The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all the claims. The claimed invention is defined solely by the appended claims including any amendments made during the pendency of this application and all equivalents of those claims as issued.
  • Moreover, in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” “has,” “having,” “includes,” “including,” “contains,” “containing” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by “comprises . . . a”, “has . . . a”, “includes . . . a”, “contains . . . a” does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, contains the element. The terms “a” and “an” are defined as one or more unless explicitly stated otherwise herein. The terms “substantially”, “essentially”, “approximately”, “about” or any other version thereof, are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1% and in another embodiment within 0.5%. The term “coupled” as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is “configured” in a certain way is configured in at least that way, but may also be configured in ways that are not listed.
  • The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter may lie in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.

Claims (20)

1. A method for enhancing image content captured by a machine vision camera, the method comprising:
receiving, at an application executing on a user computing device communicatively coupled to a machine vision camera, an image captured by the machine vision camera, the image including a plurality of indicia, each of the plurality of indicia encoding a payload;
identifying, in the image, each of the indicia;
for each respective indicia in the image, determining bounds of a respective bounding box that substantially encompass each respective indicia in the image;
displaying, on an interface of the application, a plurality of entries, each of the plurality of entries corresponding to a respective indicia of the plurality of indicia;
receiving, at the interface, a selection of one of the entries resulting in a selected entry that has a corresponding indicia;
determining, from the respective bounding box of the corresponding indicia, an upper pixel coordinate limit, a lower pixel coordinate limit, a first side pixel coordinate limit, and a second side coordinate limit;
based on the upper pixel coordinate limit, the lower pixel coordinate limit, the first side pixel coordinate limit, and the second side coordinate limit, determining a secondary box having an upper bound, a lower bound, a first side bound, and a second side bound;
displaying the image in a display region of the interface such that a center-point of the secondary box is positioned within a predetermined distance threshold from a center-point of the display region; and
scaling the image such that at least one of (i) a first vertical pixel count between the upper bound and the lower bound is within a first predetermined ratio threshold of a second vertical pixel count of the display region, or (ii) a first horizontal pixel count between the first side bound and the second side bound is within a second predetermined ratio threshold of a second horizontal pixel count of the display region.
2. The method of claim 1, further comprising:
for each respective indicia in the image, displaying the respective bounding box on an interface of the application.
3. The method of claim 1, wherein at least one of the first predetermined ratio threshold or the second predetermined ratio threshold is inclusively between 1:2 and 2:3.
4. The method of claim 1, wherein scaling the image includes scaling the image such that (i) the first vertical pixel count does not exceed the first predetermined ratio threshold of the second vertical pixel count, and (ii) the first horizontal pixel count does not exceed the second predetermined ratio threshold of the second horizontal pixel count.
5. The method of claim 1, wherein each of the plurality of entries includes a payload of the corresponding indicia.
6. The method of claim 1, further comprising:
displaying the respective bounding box corresponding to each of the plurality of indicia appearing within a visible portion of the image displayed in the display region.
7. The method of claim 1, further comprising:
masking a masked region in the display region responsive to receiving the selection, wherein the masked region is a region outside of at least one of (i) the secondary box or (ii) the respective bounding box of the corresponding indicia.
8. The method of claim 1, wherein at least one of the first predetermined ratio threshold and the second predetermined ratio threshold is user-definable such that a portion of the corresponding indicia is excluded from the scaled image.
9. The method of claim 1, further comprising:
displaying a miniature version of the image in the display region as an overlay covering a portion of the scaled image, wherein the miniature version of the image includes an indicated portion representing the scaled image.
10. A method for enhancing image content captured by a machine vision camera, the method comprising:
receiving, at an application executing on a user computing device communicatively coupled to a machine vision camera, an image captured by the machine vision camera, the image including a plurality of indicia, each of the plurality of indicia encoding a payload;
identifying, in the image, each of the indicia;
for each of the indicia in the image:
determining bounds of a respective bounding box that substantially encompass the respective indicia,
determining, from the respective bounding box, an upper pixel coordinate limit, a lower pixel coordinate limit, a first side pixel coordinate limit, and a second side coordinate limit for the respective indicia, and
based on the upper pixel coordinate limit, the lower pixel coordinate limit, the first side pixel coordinate limit, and the second side coordinate limit of the respective bounding box, determining a respective secondary box having an upper bound, a lower bound, a first side bound, and a second side bound for the respective indicia;
displaying, on an interface of the application, a plurality of entries, each of the plurality of entries corresponding to a respective indicia of the plurality of indicia;
receiving, at the interface, a selection of one of the entries resulting in a selected entry that has a corresponding indicia;
displaying the image in a display region of the interface such that a center-point of the respective secondary box of the corresponding indicia is positioned within a predetermined distance threshold from a center-point of the display region; and
scaling the image such that at least one of (i) a first vertical pixel count between the upper bound of the respective secondary box of the corresponding indicia and the lower bound of the respective secondary box of the corresponding indicia is within a first predetermined ratio threshold of a second vertical pixel count of the display region, or (ii) a first horizontal pixel count between the first side bound of the respective secondary box of the corresponding indicia and the second side bound of the respective secondary box of the corresponding indicia is within a second predetermined ratio threshold of a second horizontal pixel count of the display region.
11. The method of claim 10, further comprising:
for each of the indicia in the image, displaying the respective bounding box on an interface of the application.
12. The method of claim 10, wherein at least one of the first predetermined ratio threshold or the second predetermined ratio threshold is inclusively between 1:2 and 2:3.
13. The method of claim 10, wherein scaling the image further includes scaling the image such that (i) the first vertical pixel count does not exceed the first predetermined ratio threshold of the second vertical pixel count, and (ii) the first horizontal pixel count does not exceed the second predetermined ratio threshold of the second horizontal pixel count.
14. The method of claim 10, wherein each of the plurality of entries includes a payload of the corresponding indicia.
15. The method of claim 10, further comprising:
displaying the respective bounding box corresponding to each of the plurality of indicia appearing within a visible portion of the image displayed in the display region.
16. The method of claim 10, further comprising:
masking a masked region in the display region responsive to receiving the selection, wherein the masked region is a region outside of one of (i) the respective secondary box of the corresponding indicia or (ii) the respective bounding box of the corresponding indicia.
17. A machine vision system for enhancing image content captured by a machine vision camera, the system comprising:
a machine vision camera configured to capture an image including a plurality of indicia, each of the plurality of indicia encoding a payload; and
a user computing device executing an application, wherein the user computing device is communicatively coupled to the machine vision camera and is configured to:
receive the image captured by the machine vision camera,
identify, in the image, each of the indicia,
for each respective indicia in the image, determine bounds of a respective bounding box that substantially encompass each respective indicia in the image,
display, on an interface of the application, a plurality of entries, each of the plurality of entries corresponding to a respective indicia of the plurality of indicia,
receive, at the interface, a selection of one of the entries resulting in a selected entry that has a corresponding indicia,
determine, from the respective bounding box of the corresponding indicia, an upper pixel coordinate limit, a lower pixel coordinate limit, a first side pixel coordinate limit, and a second side coordinate limit,
based on the upper pixel coordinate limit, the lower pixel coordinate limit, the first side pixel coordinate limit, and the second side coordinate limit, determine a secondary box having an upper bound, a lower bound, a first side bound, and a second side bound,
display the image in a display region of the interface such that a center-point of the secondary box is positioned within a predetermined distance threshold from a center-point of the display region, and
scale the image such that at least one of (i) a first vertical pixel count between the upper bound and the lower bound is within a first predetermined ratio threshold of a second vertical pixel count of the display region, or (ii) a first horizontal pixel count between the first side bound and the second side bound is within a second predetermined ratio threshold of a second horizontal pixel count of the display region.
18. The system of claim 17, wherein the user computing device is further configured to:
for each of the indicia in the image, display the respective bounding box on an interface of the application.
19. The system of claim 17, wherein at least one of the first predetermined ratio threshold or the second predetermined ratio threshold is inclusively between 1:2 and 2:3.
20. The system of claim 17, wherein the user computing device is further configured to scale the image such that (i) the first vertical pixel count does not exceed the first predetermined ratio threshold of the second vertical pixel count, and (ii) the first horizontal pixel count does not exceed the second predetermined ratio threshold of the second horizontal pixel count.
US17/375,846 2021-04-30 2021-07-14 Systems and methods for enhancing image content captured by a machine vision camera Active US11507245B1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US17/375,846 US11507245B1 (en) 2021-04-30 2021-07-14 Systems and methods for enhancing image content captured by a machine vision camera
PCT/US2022/025058 WO2022231874A1 (en) 2021-04-30 2022-04-15 Systems and methods for enhancing image content captured by a machine vision camera
BE20225323A BE1029310B1 (en) 2021-04-30 2022-04-29 SYSTEMS AND METHODS FOR ENHANCEMENT OF IMAGE CONTENT CAPTURED BY A MACHINE VIEW CAMERA
US17/991,588 US11861134B2 (en) 2021-04-30 2022-11-21 Systems and methods for enhancing image content captured by a machine vision camera
US18/402,390 US20240143122A1 (en) 2021-04-30 2024-01-02 Systems and Methods for Enhancing Image Content Captured by a Machine Vision Camera

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163182392P 2021-04-30 2021-04-30
US17/375,846 US11507245B1 (en) 2021-04-30 2021-07-14 Systems and methods for enhancing image content captured by a machine vision camera

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/991,588 Continuation US11861134B2 (en) 2021-04-30 2022-11-21 Systems and methods for enhancing image content captured by a machine vision camera

Publications (2)

Publication Number Publication Date
US20220353388A1 true US20220353388A1 (en) 2022-11-03
US11507245B1 US11507245B1 (en) 2022-11-22

Family

ID=83807929

Family Applications (3)

Application Number Title Priority Date Filing Date
US17/375,846 Active US11507245B1 (en) 2021-04-30 2021-07-14 Systems and methods for enhancing image content captured by a machine vision camera
US17/991,588 Active US11861134B2 (en) 2021-04-30 2022-11-21 Systems and methods for enhancing image content captured by a machine vision camera
US18/402,390 Pending US20240143122A1 (en) 2021-04-30 2024-01-02 Systems and Methods for Enhancing Image Content Captured by a Machine Vision Camera

Family Applications After (2)

Application Number Title Priority Date Filing Date
US17/991,588 Active US11861134B2 (en) 2021-04-30 2022-11-21 Systems and methods for enhancing image content captured by a machine vision camera
US18/402,390 Pending US20240143122A1 (en) 2021-04-30 2024-01-02 Systems and Methods for Enhancing Image Content Captured by a Machine Vision Camera

Country Status (2)

Country Link
US (3) US11507245B1 (en)
WO (1) WO2022231874A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USD1034577S1 (en) * 2020-06-19 2024-07-09 Zebra Technologies Corporation Data capture device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070146491A1 (en) * 2004-06-09 2007-06-28 Cognex Corporation Human-machine-interface and method for manipulating data in a machine vision system
US20080253608A1 (en) * 2007-03-08 2008-10-16 Long Richard G Systems, Devices, and/or Methods for Managing Images
US20100027894A1 (en) * 2008-07-31 2010-02-04 Imageid System and method for automatic recognition of undetected assets
US20140357312A1 (en) * 2010-11-04 2014-12-04 Digimarc Corporation Smartphone-based methods and systems
US20210133496A1 (en) * 2019-11-05 2021-05-06 Elementary Robotics, Inc. Systems and methods for robot-aided product inspection
US20210281739A1 (en) * 2018-07-24 2021-09-09 Sony Corporation Information processing device and method, and program

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8819172B2 (en) * 2010-11-04 2014-08-26 Digimarc Corporation Smartphone-based methods and systems
US9892301B1 (en) * 2015-03-05 2018-02-13 Digimarc Corporation Localization of machine-readable indicia in digital capture systems
US9928438B2 (en) * 2016-03-10 2018-03-27 Conduent Business Services, Llc High accuracy localization system and method for retail store profiling via product image recognition and its corresponding dimension database
CN107679529B (en) * 2016-08-01 2019-12-24 杭州海康威视数字技术股份有限公司 Logistics package picture processing method, device and system
EP3454298B1 (en) * 2017-09-06 2019-08-07 Sick AG Camera device and method for recording a flow of objects

Also Published As

Publication number Publication date
US20230086311A1 (en) 2023-03-23
US20240143122A1 (en) 2024-05-02
US11861134B2 (en) 2024-01-02
US11507245B1 (en) 2022-11-22
WO2022231874A1 (en) 2022-11-03

Similar Documents

Publication Publication Date Title
US20240143122A1 (en) Systems and Methods for Enhancing Image Content Captured by a Machine Vision Camera
US11809949B2 (en) Systems and methods to optimize imaging settings and image capture for a machine vision job
US20240319850A1 (en) Systems and Methods for Facilitating Selection of Tools for Machine Vision Jobs
US20240193941A1 (en) Identification and Presentation of Edges, Shapes and Unique Objects in an Image Used for a Machine Vision Job Setup
US12131480B2 (en) Method of creating an optimized/adaptive ROI based on detection of barcode location in the FOV
US11727664B2 (en) Systems and methods for determining an adaptive region of interest (ROI) for image metrics calculations
US20230042611A1 (en) Systems and Methods for Enhancing Trainable Optical Character Recognition (OCR) Performance
US11210484B1 (en) Systems and methods for creating machine vision jobs including barcode scanning
CN114091497A (en) System and method for optimizing imaging settings for machine vision tasks
US20240241733A1 (en) Industrial Ethernet Configuration Tool With Preview Capabilities
US11966569B2 (en) Systems and methods for interacting with overlapping regions of interest in machine vision applications
US20240290114A1 (en) Systems and Methods Utilizing Machine Vision and Three-Dimensional Modeling Techniques for Surface Matching
US20240144632A1 (en) ROI Image Windowing
US20240112436A1 (en) Ranked adaptive roi for vision cameras
US20240005653A1 (en) Systems and Methods for Tool Canvas Metadata & Auto-Configuration in Machine Vision Applications
US20240031666A1 (en) Systems and Methods for Changing Programs on Imaging Devices
US11995900B2 (en) Method on identifying indicia orientation and decoding indicia for machine vision systems
US20240104320A1 (en) Systems and Methods to Optimize Decoder Parameters of an Indicia Decoder
CN114091498A (en) System and method for optimizing performance of machine vision system

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: ZEBRA TECHNOLOGIES CORPORATION, ILLINOIS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GURURAJA, ABHILASH;WEST, CHRISTOPHER M.;SIGNING DATES FROM 20210712 TO 20210713;REEL/FRAME:059268/0193

STCF Information on status: patent grant

Free format text: PATENTED CASE